Test Report: KVM_Linux_crio 19389

4e9c16444aca391b349fd87cc48c80a0a38d518e:2024-08-07:35690

Failed tests (12/215)

TestAddons/Setup (2400.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-533488 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-533488 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.956330482s)
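The run was killed by the 40-minute timeout (signal: killed after 39m59.96s, matching the 2400.07s duration shown above) while --wait=true was still verifying components; the stdout below shows the addons enabling before the kill. A rough local repro sketch, same invocation as in the log above (assumes a working kvm2/libvirt host and the in-tree out/minikube-linux-amd64 binary; the profile name addons-533488 is simply the one this run used):

	out/minikube-linux-amd64 start -p addons-533488 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller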
-- stdout --
	* [addons-533488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-533488" primary control-plane node in "addons-533488" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	* Verifying registry addon...
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-533488 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-533488 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, nvidia-device-plugin, metrics-server, storage-provisioner, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
-- /stdout --
** stderr ** 
	I0807 17:36:59.664618   29086 out.go:291] Setting OutFile to fd 1 ...
	I0807 17:36:59.664741   29086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:59.664752   29086 out.go:304] Setting ErrFile to fd 2...
	I0807 17:36:59.664759   29086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:59.664964   29086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 17:36:59.665622   29086 out.go:298] Setting JSON to false
	I0807 17:36:59.666498   29086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4766,"bootTime":1723047454,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 17:36:59.666559   29086 start.go:139] virtualization: kvm guest
	I0807 17:36:59.669083   29086 out.go:177] * [addons-533488] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 17:36:59.670915   29086 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 17:36:59.670950   29086 notify.go:220] Checking for updates...
	I0807 17:36:59.674000   29086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:36:59.675426   29086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 17:36:59.676777   29086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:36:59.678082   29086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 17:36:59.679556   29086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 17:36:59.681092   29086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:36:59.712513   29086 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 17:36:59.714013   29086 start.go:297] selected driver: kvm2
	I0807 17:36:59.714024   29086 start.go:901] validating driver "kvm2" against <nil>
	I0807 17:36:59.714034   29086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 17:36:59.714713   29086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:59.714794   29086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 17:36:59.729770   29086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 17:36:59.729823   29086 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:36:59.730053   29086 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 17:36:59.730078   29086 cni.go:84] Creating CNI manager for ""
	I0807 17:36:59.730085   29086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:36:59.730096   29086 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:36:59.730163   29086 start.go:340] cluster config:
	{Name:addons-533488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-533488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:36:59.730304   29086 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:59.732171   29086 out.go:177] * Starting "addons-533488" primary control-plane node in "addons-533488" cluster
	I0807 17:36:59.733572   29086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 17:36:59.733603   29086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:59.733610   29086 cache.go:56] Caching tarball of preloaded images
	I0807 17:36:59.733689   29086 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 17:36:59.733708   29086 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 17:36:59.733995   29086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/config.json ...
	I0807 17:36:59.734015   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/config.json: {Name:mke44dc90200b35239d4bc820921ed82f626a205 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:36:59.734164   29086 start.go:360] acquireMachinesLock for addons-533488: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 17:36:59.734226   29086 start.go:364] duration metric: took 45.607µs to acquireMachinesLock for "addons-533488"
	I0807 17:36:59.734247   29086 start.go:93] Provisioning new machine with config: &{Name:addons-533488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-533488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 17:36:59.734309   29086 start.go:125] createHost starting for "" (driver="kvm2")
	I0807 17:36:59.735955   29086 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0807 17:36:59.736104   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:36:59.736166   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:36:59.750315   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0807 17:36:59.750716   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:36:59.751244   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:36:59.751269   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:36:59.751697   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:36:59.751886   29086 main.go:141] libmachine: (addons-533488) Calling .GetMachineName
	I0807 17:36:59.752018   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:36:59.752218   29086 start.go:159] libmachine.API.Create for "addons-533488" (driver="kvm2")
	I0807 17:36:59.752248   29086 client.go:168] LocalClient.Create starting
	I0807 17:36:59.752295   29086 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 17:36:59.875980   29086 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 17:36:59.998431   29086 main.go:141] libmachine: Running pre-create checks...
	I0807 17:36:59.998457   29086 main.go:141] libmachine: (addons-533488) Calling .PreCreateCheck
	I0807 17:36:59.998954   29086 main.go:141] libmachine: (addons-533488) Calling .GetConfigRaw
	I0807 17:36:59.999475   29086 main.go:141] libmachine: Creating machine...
	I0807 17:36:59.999491   29086 main.go:141] libmachine: (addons-533488) Calling .Create
	I0807 17:36:59.999628   29086 main.go:141] libmachine: (addons-533488) Creating KVM machine...
	I0807 17:37:00.001107   29086 main.go:141] libmachine: (addons-533488) DBG | found existing default KVM network
	I0807 17:37:00.001850   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:00.001664   29108 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0807 17:37:00.001885   29086 main.go:141] libmachine: (addons-533488) DBG | created network xml: 
	I0807 17:37:00.001912   29086 main.go:141] libmachine: (addons-533488) DBG | <network>
	I0807 17:37:00.001934   29086 main.go:141] libmachine: (addons-533488) DBG |   <name>mk-addons-533488</name>
	I0807 17:37:00.001949   29086 main.go:141] libmachine: (addons-533488) DBG |   <dns enable='no'/>
	I0807 17:37:00.001957   29086 main.go:141] libmachine: (addons-533488) DBG |   
	I0807 17:37:00.001972   29086 main.go:141] libmachine: (addons-533488) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0807 17:37:00.001985   29086 main.go:141] libmachine: (addons-533488) DBG |     <dhcp>
	I0807 17:37:00.001998   29086 main.go:141] libmachine: (addons-533488) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0807 17:37:00.002016   29086 main.go:141] libmachine: (addons-533488) DBG |     </dhcp>
	I0807 17:37:00.002031   29086 main.go:141] libmachine: (addons-533488) DBG |   </ip>
	I0807 17:37:00.002042   29086 main.go:141] libmachine: (addons-533488) DBG |   
	I0807 17:37:00.002058   29086 main.go:141] libmachine: (addons-533488) DBG | </network>
	I0807 17:37:00.002070   29086 main.go:141] libmachine: (addons-533488) DBG | 
	I0807 17:37:00.007838   29086 main.go:141] libmachine: (addons-533488) DBG | trying to create private KVM network mk-addons-533488 192.168.39.0/24...
	I0807 17:37:00.075447   29086 main.go:141] libmachine: (addons-533488) DBG | private KVM network mk-addons-533488 192.168.39.0/24 created
	I0807 17:37:00.075482   29086 main.go:141] libmachine: (addons-533488) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488 ...
	I0807 17:37:00.075543   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:00.075408   29108 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:37:00.075580   29086 main.go:141] libmachine: (addons-533488) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 17:37:00.075610   29086 main.go:141] libmachine: (addons-533488) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 17:37:00.350554   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:00.350420   29108 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa...
	I0807 17:37:00.524804   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:00.524656   29108 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/addons-533488.rawdisk...
	I0807 17:37:00.524845   29086 main.go:141] libmachine: (addons-533488) DBG | Writing magic tar header
	I0807 17:37:00.524931   29086 main.go:141] libmachine: (addons-533488) DBG | Writing SSH key tar header
	I0807 17:37:00.524967   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:00.524822   29108 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488 ...
	I0807 17:37:00.524985   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488 (perms=drwx------)
	I0807 17:37:00.525000   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488
	I0807 17:37:00.525010   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 17:37:00.525018   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:37:00.525033   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 17:37:00.525045   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 17:37:00.525054   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 17:37:00.525064   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home/jenkins
	I0807 17:37:00.525073   29086 main.go:141] libmachine: (addons-533488) DBG | Checking permissions on dir: /home
	I0807 17:37:00.525085   29086 main.go:141] libmachine: (addons-533488) DBG | Skipping /home - not owner
	I0807 17:37:00.525101   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 17:37:00.525116   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 17:37:00.525129   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 17:37:00.525140   29086 main.go:141] libmachine: (addons-533488) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 17:37:00.525152   29086 main.go:141] libmachine: (addons-533488) Creating domain...
	I0807 17:37:00.526036   29086 main.go:141] libmachine: (addons-533488) define libvirt domain using xml: 
	I0807 17:37:00.526066   29086 main.go:141] libmachine: (addons-533488) <domain type='kvm'>
	I0807 17:37:00.526077   29086 main.go:141] libmachine: (addons-533488)   <name>addons-533488</name>
	I0807 17:37:00.526085   29086 main.go:141] libmachine: (addons-533488)   <memory unit='MiB'>4000</memory>
	I0807 17:37:00.526101   29086 main.go:141] libmachine: (addons-533488)   <vcpu>2</vcpu>
	I0807 17:37:00.526114   29086 main.go:141] libmachine: (addons-533488)   <features>
	I0807 17:37:00.526123   29086 main.go:141] libmachine: (addons-533488)     <acpi/>
	I0807 17:37:00.526130   29086 main.go:141] libmachine: (addons-533488)     <apic/>
	I0807 17:37:00.526135   29086 main.go:141] libmachine: (addons-533488)     <pae/>
	I0807 17:37:00.526143   29086 main.go:141] libmachine: (addons-533488)     
	I0807 17:37:00.526148   29086 main.go:141] libmachine: (addons-533488)   </features>
	I0807 17:37:00.526155   29086 main.go:141] libmachine: (addons-533488)   <cpu mode='host-passthrough'>
	I0807 17:37:00.526160   29086 main.go:141] libmachine: (addons-533488)   
	I0807 17:37:00.526168   29086 main.go:141] libmachine: (addons-533488)   </cpu>
	I0807 17:37:00.526176   29086 main.go:141] libmachine: (addons-533488)   <os>
	I0807 17:37:00.526186   29086 main.go:141] libmachine: (addons-533488)     <type>hvm</type>
	I0807 17:37:00.526218   29086 main.go:141] libmachine: (addons-533488)     <boot dev='cdrom'/>
	I0807 17:37:00.526243   29086 main.go:141] libmachine: (addons-533488)     <boot dev='hd'/>
	I0807 17:37:00.526251   29086 main.go:141] libmachine: (addons-533488)     <bootmenu enable='no'/>
	I0807 17:37:00.526256   29086 main.go:141] libmachine: (addons-533488)   </os>
	I0807 17:37:00.526261   29086 main.go:141] libmachine: (addons-533488)   <devices>
	I0807 17:37:00.526266   29086 main.go:141] libmachine: (addons-533488)     <disk type='file' device='cdrom'>
	I0807 17:37:00.526276   29086 main.go:141] libmachine: (addons-533488)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/boot2docker.iso'/>
	I0807 17:37:00.526285   29086 main.go:141] libmachine: (addons-533488)       <target dev='hdc' bus='scsi'/>
	I0807 17:37:00.526290   29086 main.go:141] libmachine: (addons-533488)       <readonly/>
	I0807 17:37:00.526296   29086 main.go:141] libmachine: (addons-533488)     </disk>
	I0807 17:37:00.526302   29086 main.go:141] libmachine: (addons-533488)     <disk type='file' device='disk'>
	I0807 17:37:00.526311   29086 main.go:141] libmachine: (addons-533488)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 17:37:00.526319   29086 main.go:141] libmachine: (addons-533488)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/addons-533488.rawdisk'/>
	I0807 17:37:00.526326   29086 main.go:141] libmachine: (addons-533488)       <target dev='hda' bus='virtio'/>
	I0807 17:37:00.526332   29086 main.go:141] libmachine: (addons-533488)     </disk>
	I0807 17:37:00.526341   29086 main.go:141] libmachine: (addons-533488)     <interface type='network'>
	I0807 17:37:00.526363   29086 main.go:141] libmachine: (addons-533488)       <source network='mk-addons-533488'/>
	I0807 17:37:00.526383   29086 main.go:141] libmachine: (addons-533488)       <model type='virtio'/>
	I0807 17:37:00.526395   29086 main.go:141] libmachine: (addons-533488)     </interface>
	I0807 17:37:00.526407   29086 main.go:141] libmachine: (addons-533488)     <interface type='network'>
	I0807 17:37:00.526419   29086 main.go:141] libmachine: (addons-533488)       <source network='default'/>
	I0807 17:37:00.526430   29086 main.go:141] libmachine: (addons-533488)       <model type='virtio'/>
	I0807 17:37:00.526440   29086 main.go:141] libmachine: (addons-533488)     </interface>
	I0807 17:37:00.526454   29086 main.go:141] libmachine: (addons-533488)     <serial type='pty'>
	I0807 17:37:00.526467   29086 main.go:141] libmachine: (addons-533488)       <target port='0'/>
	I0807 17:37:00.526478   29086 main.go:141] libmachine: (addons-533488)     </serial>
	I0807 17:37:00.526488   29086 main.go:141] libmachine: (addons-533488)     <console type='pty'>
	I0807 17:37:00.526514   29086 main.go:141] libmachine: (addons-533488)       <target type='serial' port='0'/>
	I0807 17:37:00.526523   29086 main.go:141] libmachine: (addons-533488)     </console>
	I0807 17:37:00.526534   29086 main.go:141] libmachine: (addons-533488)     <rng model='virtio'>
	I0807 17:37:00.526550   29086 main.go:141] libmachine: (addons-533488)       <backend model='random'>/dev/random</backend>
	I0807 17:37:00.526565   29086 main.go:141] libmachine: (addons-533488)     </rng>
	I0807 17:37:00.526576   29086 main.go:141] libmachine: (addons-533488)     
	I0807 17:37:00.526584   29086 main.go:141] libmachine: (addons-533488)     
	I0807 17:37:00.526594   29086 main.go:141] libmachine: (addons-533488)   </devices>
	I0807 17:37:00.526605   29086 main.go:141] libmachine: (addons-533488) </domain>
	I0807 17:37:00.526611   29086 main.go:141] libmachine: (addons-533488) 
	I0807 17:37:00.532448   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:58:47:d5 in network default
	I0807 17:37:00.533057   29086 main.go:141] libmachine: (addons-533488) Ensuring networks are active...
	I0807 17:37:00.533074   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:00.533810   29086 main.go:141] libmachine: (addons-533488) Ensuring network default is active
	I0807 17:37:00.534190   29086 main.go:141] libmachine: (addons-533488) Ensuring network mk-addons-533488 is active
	I0807 17:37:00.534615   29086 main.go:141] libmachine: (addons-533488) Getting domain xml...
	I0807 17:37:00.535324   29086 main.go:141] libmachine: (addons-533488) Creating domain...
	I0807 17:37:01.945346   29086 main.go:141] libmachine: (addons-533488) Waiting to get IP...
	I0807 17:37:01.946077   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:01.946528   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:01.946566   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:01.946512   29108 retry.go:31] will retry after 245.10662ms: waiting for machine to come up
	I0807 17:37:02.192930   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:02.193338   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:02.193360   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:02.193298   29108 retry.go:31] will retry after 348.908734ms: waiting for machine to come up
	I0807 17:37:02.543911   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:02.544352   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:02.544383   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:02.544302   29108 retry.go:31] will retry after 366.612664ms: waiting for machine to come up
	I0807 17:37:02.912839   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:02.913266   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:02.913296   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:02.913214   29108 retry.go:31] will retry after 600.50171ms: waiting for machine to come up
	I0807 17:37:03.514925   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:03.515419   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:03.515449   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:03.515359   29108 retry.go:31] will retry after 583.071003ms: waiting for machine to come up
	I0807 17:37:04.100061   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:04.100495   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:04.100522   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:04.100447   29108 retry.go:31] will retry after 749.302325ms: waiting for machine to come up
	I0807 17:37:04.851021   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:04.851433   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:04.851460   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:04.851401   29108 retry.go:31] will retry after 1.048750268s: waiting for machine to come up
	I0807 17:37:05.901543   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:05.901869   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:05.901891   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:05.901827   29108 retry.go:31] will retry after 1.450103883s: waiting for machine to come up
	I0807 17:37:07.354536   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:07.355039   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:07.355080   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:07.355006   29108 retry.go:31] will retry after 1.289391403s: waiting for machine to come up
	I0807 17:37:08.646546   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:08.646945   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:08.646983   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:08.646902   29108 retry.go:31] will retry after 1.625399857s: waiting for machine to come up
	I0807 17:37:10.273603   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:10.274091   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:10.274128   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:10.274029   29108 retry.go:31] will retry after 1.767228279s: waiting for machine to come up
	I0807 17:37:12.044029   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:12.044425   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:12.044462   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:12.044385   29108 retry.go:31] will retry after 2.237546862s: waiting for machine to come up
	I0807 17:37:14.284760   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:14.285206   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:14.285228   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:14.285170   29108 retry.go:31] will retry after 4.232910306s: waiting for machine to come up
	I0807 17:37:18.522064   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:18.522418   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find current IP address of domain addons-533488 in network mk-addons-533488
	I0807 17:37:18.522436   29086 main.go:141] libmachine: (addons-533488) DBG | I0807 17:37:18.522356   29108 retry.go:31] will retry after 5.208836617s: waiting for machine to come up
	I0807 17:37:23.735590   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:23.736060   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has current primary IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:23.736098   29086 main.go:141] libmachine: (addons-533488) Found IP for machine: 192.168.39.180
	I0807 17:37:23.736120   29086 main.go:141] libmachine: (addons-533488) Reserving static IP address...
	I0807 17:37:23.736441   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find host DHCP lease matching {name: "addons-533488", mac: "52:54:00:17:25:52", ip: "192.168.39.180"} in network mk-addons-533488
	I0807 17:37:23.807031   29086 main.go:141] libmachine: (addons-533488) DBG | Getting to WaitForSSH function...
	I0807 17:37:23.807059   29086 main.go:141] libmachine: (addons-533488) Reserved static IP address: 192.168.39.180
	I0807 17:37:23.807071   29086 main.go:141] libmachine: (addons-533488) Waiting for SSH to be available...
	I0807 17:37:23.810087   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:23.810504   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488
	I0807 17:37:23.810532   29086 main.go:141] libmachine: (addons-533488) DBG | unable to find defined IP address of network mk-addons-533488 interface with MAC address 52:54:00:17:25:52
	I0807 17:37:23.810677   29086 main.go:141] libmachine: (addons-533488) DBG | Using SSH client type: external
	I0807 17:37:23.810703   29086 main.go:141] libmachine: (addons-533488) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa (-rw-------)
	I0807 17:37:23.810726   29086 main.go:141] libmachine: (addons-533488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 17:37:23.810736   29086 main.go:141] libmachine: (addons-533488) DBG | About to run SSH command:
	I0807 17:37:23.810749   29086 main.go:141] libmachine: (addons-533488) DBG | exit 0
	I0807 17:37:23.814356   29086 main.go:141] libmachine: (addons-533488) DBG | SSH cmd err, output: exit status 255: 
	I0807 17:37:23.814375   29086 main.go:141] libmachine: (addons-533488) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0807 17:37:23.814384   29086 main.go:141] libmachine: (addons-533488) DBG | command : exit 0
	I0807 17:37:23.814388   29086 main.go:141] libmachine: (addons-533488) DBG | err     : exit status 255
	I0807 17:37:23.814395   29086 main.go:141] libmachine: (addons-533488) DBG | output  : 
	I0807 17:37:26.816083   29086 main.go:141] libmachine: (addons-533488) DBG | Getting to WaitForSSH function...
	I0807 17:37:26.818753   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:26.819259   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:26.819299   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:26.819418   29086 main.go:141] libmachine: (addons-533488) DBG | Using SSH client type: external
	I0807 17:37:26.819446   29086 main.go:141] libmachine: (addons-533488) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa (-rw-------)
	I0807 17:37:26.819468   29086 main.go:141] libmachine: (addons-533488) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.180 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 17:37:26.819481   29086 main.go:141] libmachine: (addons-533488) DBG | About to run SSH command:
	I0807 17:37:26.819492   29086 main.go:141] libmachine: (addons-533488) DBG | exit 0
	I0807 17:37:26.944283   29086 main.go:141] libmachine: (addons-533488) DBG | SSH cmd err, output: <nil>: 
	I0807 17:37:26.944604   29086 main.go:141] libmachine: (addons-533488) KVM machine creation complete!
	I0807 17:37:26.944887   29086 main.go:141] libmachine: (addons-533488) Calling .GetConfigRaw
	I0807 17:37:26.945419   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:26.945605   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:26.945766   29086 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 17:37:26.945782   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:26.946963   29086 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 17:37:26.946982   29086 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 17:37:26.946988   29086 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 17:37:26.946993   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:26.949208   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:26.949486   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:26.949512   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:26.949667   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:26.949844   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:26.950095   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:26.950238   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:26.950449   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:26.950644   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:26.950659   29086 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 17:37:27.055343   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:37:27.055367   29086 main.go:141] libmachine: Detecting the provisioner...
	I0807 17:37:27.055376   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.058239   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.058675   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.058704   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.058847   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.059109   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.059285   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.059442   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.059584   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:27.059725   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:27.059735   29086 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 17:37:27.169071   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 17:37:27.169167   29086 main.go:141] libmachine: found compatible host: buildroot
	I0807 17:37:27.169183   29086 main.go:141] libmachine: Provisioning with buildroot...
	I0807 17:37:27.169195   29086 main.go:141] libmachine: (addons-533488) Calling .GetMachineName
	I0807 17:37:27.169478   29086 buildroot.go:166] provisioning hostname "addons-533488"
	I0807 17:37:27.169506   29086 main.go:141] libmachine: (addons-533488) Calling .GetMachineName
	I0807 17:37:27.169693   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.172178   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.172515   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.172540   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.172748   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.172938   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.173109   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.173252   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.173452   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:27.173643   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:27.173662   29086 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-533488 && echo "addons-533488" | sudo tee /etc/hostname
	I0807 17:37:27.294851   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-533488
	
	I0807 17:37:27.294873   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.297537   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.298004   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.298030   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.298221   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.298410   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.298563   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.298679   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.298856   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:27.299068   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:27.299085   29086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-533488' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-533488/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-533488' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 17:37:27.417583   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 17:37:27.417608   29086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 17:37:27.417643   29086 buildroot.go:174] setting up certificates
	I0807 17:37:27.417654   29086 provision.go:84] configureAuth start
	I0807 17:37:27.417666   29086 main.go:141] libmachine: (addons-533488) Calling .GetMachineName
	I0807 17:37:27.417992   29086 main.go:141] libmachine: (addons-533488) Calling .GetIP
	I0807 17:37:27.420589   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.421016   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.421043   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.421176   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.423318   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.423656   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.423675   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.423804   29086 provision.go:143] copyHostCerts
	I0807 17:37:27.423883   29086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 17:37:27.424012   29086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 17:37:27.424074   29086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 17:37:27.424123   29086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.addons-533488 san=[127.0.0.1 192.168.39.180 addons-533488 localhost minikube]
	I0807 17:37:27.522619   29086 provision.go:177] copyRemoteCerts
	I0807 17:37:27.522671   29086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 17:37:27.522692   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.525199   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.525500   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.525525   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.525706   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.525881   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.526048   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.526183   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:27.610480   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 17:37:27.634232   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 17:37:27.657878   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 17:37:27.681482   29086 provision.go:87] duration metric: took 263.812417ms to configureAuth
	I0807 17:37:27.681512   29086 buildroot.go:189] setting minikube options for container-runtime
	I0807 17:37:27.681692   29086 config.go:182] Loaded profile config "addons-533488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 17:37:27.681759   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.684333   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.684671   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.684702   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.684850   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.685031   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.685185   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.685301   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.685442   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:27.685654   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:27.685670   29086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 17:37:27.945573   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 17:37:27.945609   29086 main.go:141] libmachine: Checking connection to Docker...
	I0807 17:37:27.945616   29086 main.go:141] libmachine: (addons-533488) Calling .GetURL
	I0807 17:37:27.946804   29086 main.go:141] libmachine: (addons-533488) DBG | Using libvirt version 6000000
	I0807 17:37:27.948742   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.949051   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.949077   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.949282   29086 main.go:141] libmachine: Docker is up and running!
	I0807 17:37:27.949298   29086 main.go:141] libmachine: Reticulating splines...
	I0807 17:37:27.949306   29086 client.go:171] duration metric: took 28.197048046s to LocalClient.Create
	I0807 17:37:27.949334   29086 start.go:167] duration metric: took 28.197133095s to libmachine.API.Create "addons-533488"
	I0807 17:37:27.949345   29086 start.go:293] postStartSetup for "addons-533488" (driver="kvm2")
	I0807 17:37:27.949359   29086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 17:37:27.949392   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:27.949637   29086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 17:37:27.949660   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:27.951589   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.951937   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:27.951955   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:27.952155   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:27.952352   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:27.952547   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:27.952717   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:28.034130   29086 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 17:37:28.038249   29086 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 17:37:28.038272   29086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 17:37:28.038334   29086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 17:37:28.038358   29086 start.go:296] duration metric: took 89.00682ms for postStartSetup
	I0807 17:37:28.038387   29086 main.go:141] libmachine: (addons-533488) Calling .GetConfigRaw
	I0807 17:37:28.038891   29086 main.go:141] libmachine: (addons-533488) Calling .GetIP
	I0807 17:37:28.041531   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.041816   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:28.041843   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.042049   29086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/config.json ...
	I0807 17:37:28.042228   29086 start.go:128] duration metric: took 28.30790909s to createHost
	I0807 17:37:28.042251   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:28.044504   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.044865   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:28.044889   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.045025   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:28.045198   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:28.045366   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:28.045494   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:28.045637   29086 main.go:141] libmachine: Using SSH client type: native
	I0807 17:37:28.045829   29086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0807 17:37:28.045845   29086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 17:37:28.152832   29086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723052248.131729692
	
	I0807 17:37:28.152856   29086 fix.go:216] guest clock: 1723052248.131729692
	I0807 17:37:28.152864   29086 fix.go:229] Guest: 2024-08-07 17:37:28.131729692 +0000 UTC Remote: 2024-08-07 17:37:28.042238133 +0000 UTC m=+28.410003448 (delta=89.491559ms)
	I0807 17:37:28.152902   29086 fix.go:200] guest clock delta is within tolerance: 89.491559ms
	I0807 17:37:28.152908   29086 start.go:83] releasing machines lock for "addons-533488", held for 28.418669896s
	I0807 17:37:28.152933   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:28.153174   29086 main.go:141] libmachine: (addons-533488) Calling .GetIP
	I0807 17:37:28.156036   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.156421   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:28.156443   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.156609   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:28.157130   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:28.157296   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:28.157404   29086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 17:37:28.157452   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:28.157583   29086 ssh_runner.go:195] Run: cat /version.json
	I0807 17:37:28.157601   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:28.160063   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.160096   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.160476   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:28.160502   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.160666   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:28.160732   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:28.160756   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:28.161006   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:28.161006   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:28.161205   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:28.161224   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:28.161379   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:28.161444   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:28.161510   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:28.264406   29086 ssh_runner.go:195] Run: systemctl --version
	I0807 17:37:28.270262   29086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 17:37:28.430922   29086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 17:37:28.436952   29086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 17:37:28.437015   29086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 17:37:28.452999   29086 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 17:37:28.453022   29086 start.go:495] detecting cgroup driver to use...
	I0807 17:37:28.453099   29086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 17:37:28.469920   29086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 17:37:28.483224   29086 docker.go:217] disabling cri-docker service (if available) ...
	I0807 17:37:28.483272   29086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 17:37:28.496976   29086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 17:37:28.510785   29086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 17:37:28.625211   29086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 17:37:28.773744   29086 docker.go:233] disabling docker service ...
	I0807 17:37:28.773820   29086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 17:37:28.788631   29086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 17:37:28.801433   29086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 17:37:28.926951   29086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 17:37:29.056703   29086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 17:37:29.071183   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 17:37:29.089961   29086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 17:37:29.090027   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.100297   29086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 17:37:29.100356   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.110694   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.120846   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.130834   29086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 17:37:29.141172   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.150924   29086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.167662   29086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 17:37:29.177828   29086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 17:37:29.186927   29086 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 17:37:29.186985   29086 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 17:37:29.200050   29086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 17:37:29.209158   29086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:37:29.319801   29086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 17:37:29.449273   29086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 17:37:29.449372   29086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 17:37:29.454238   29086 start.go:563] Will wait 60s for crictl version
	I0807 17:37:29.454307   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:37:29.458040   29086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 17:37:29.505157   29086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 17:37:29.505283   29086 ssh_runner.go:195] Run: crio --version
	I0807 17:37:29.534744   29086 ssh_runner.go:195] Run: crio --version
	I0807 17:37:29.566371   29086 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 17:37:29.568018   29086 main.go:141] libmachine: (addons-533488) Calling .GetIP
	I0807 17:37:29.570839   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:29.571324   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:29.571349   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:29.571632   29086 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 17:37:29.575824   29086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 17:37:29.588880   29086 kubeadm.go:883] updating cluster {Name:addons-533488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-533488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 17:37:29.589008   29086 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 17:37:29.589072   29086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 17:37:29.622367   29086 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0807 17:37:29.622445   29086 ssh_runner.go:195] Run: which lz4
	I0807 17:37:29.626548   29086 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 17:37:29.630773   29086 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 17:37:29.630800   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0807 17:37:31.007017   29086 crio.go:462] duration metric: took 1.380495197s to copy over tarball
	I0807 17:37:31.007091   29086 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 17:37:33.302303   29086 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295182018s)
	I0807 17:37:33.302329   29086 crio.go:469] duration metric: took 2.295289128s to extract the tarball
	I0807 17:37:33.302338   29086 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 17:37:33.340637   29086 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 17:37:33.384078   29086 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 17:37:33.384106   29086 cache_images.go:84] Images are preloaded, skipping loading
	I0807 17:37:33.384117   29086 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.30.3 crio true true} ...
	I0807 17:37:33.384277   29086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-533488 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-533488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 17:37:33.384347   29086 ssh_runner.go:195] Run: crio config
	I0807 17:37:33.427021   29086 cni.go:84] Creating CNI manager for ""
	I0807 17:37:33.427038   29086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:37:33.427047   29086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 17:37:33.427067   29086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-533488 NodeName:addons-533488 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 17:37:33.427184   29086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-533488"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 17:37:33.427236   29086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 17:37:33.437324   29086 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 17:37:33.437406   29086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 17:37:33.447180   29086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0807 17:37:33.463925   29086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 17:37:33.480389   29086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0807 17:37:33.497089   29086 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0807 17:37:33.500771   29086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 17:37:33.513405   29086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:37:33.630210   29086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:37:33.646928   29086 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488 for IP: 192.168.39.180
	I0807 17:37:33.646948   29086 certs.go:194] generating shared ca certs ...
	I0807 17:37:33.646965   29086 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:33.647100   29086 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 17:37:33.905700   29086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt ...
	I0807 17:37:33.905734   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt: {Name:mk1ad2ad2a65fcafbda9f4bddc6f6746696b846e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:33.905953   29086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key ...
	I0807 17:37:33.905974   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key: {Name:mkc9d150d26896f53fa908e2bf214b612b9bd4ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:33.906097   29086 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 17:37:34.026706   29086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt ...
	I0807 17:37:34.026735   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt: {Name:mkfb924f1854c52c4e627c277966325c1941ebb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.026934   29086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key ...
	I0807 17:37:34.026951   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key: {Name:mk0da1a0c4c7a1e579722a5a34978c3325634ace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.027065   29086 certs.go:256] generating profile certs ...
	I0807 17:37:34.027145   29086 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.key
	I0807 17:37:34.027175   29086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.crt with IP's: []
	I0807 17:37:34.125253   29086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.crt ...
	I0807 17:37:34.125282   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.crt: {Name:mkb2e7aa97eedb3dbf536e1b81759376bd3b00e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.125484   29086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.key ...
	I0807 17:37:34.125498   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/client.key: {Name:mkb38587bd5368bc02cd81a0992ec1fe72bedc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.125593   29086 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key.a8452998
	I0807 17:37:34.125618   29086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt.a8452998 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.180]
	I0807 17:37:34.204015   29086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt.a8452998 ...
	I0807 17:37:34.204041   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt.a8452998: {Name:mke8f0892ad9c609aa746509fbb67a3aa93d2b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.204231   29086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key.a8452998 ...
	I0807 17:37:34.204251   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key.a8452998: {Name:mk43a12b14e50816f39262c83dd98c97ecd7ad3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.204349   29086 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt.a8452998 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt
	I0807 17:37:34.204443   29086 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key.a8452998 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key
	I0807 17:37:34.204509   29086 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.key
	I0807 17:37:34.204532   29086 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.crt with IP's: []
	I0807 17:37:34.275072   29086 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.crt ...
	I0807 17:37:34.275098   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.crt: {Name:mk5eed3c5225169a76e1b7cb5325cc4edcfb3e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.275267   29086 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.key ...
	I0807 17:37:34.275281   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.key: {Name:mkc97db63f6946e371d068e86ff4b67c2aeb0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:34.275486   29086 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 17:37:34.275539   29086 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 17:37:34.275579   29086 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 17:37:34.275613   29086 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 17:37:34.276177   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 17:37:34.300249   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 17:37:34.325880   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 17:37:34.360221   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 17:37:34.390050   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 17:37:34.413997   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 17:37:34.437478   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 17:37:34.461473   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/addons-533488/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 17:37:34.484537   29086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 17:37:34.507565   29086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 17:37:34.524696   29086 ssh_runner.go:195] Run: openssl version
	I0807 17:37:34.530537   29086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 17:37:34.542220   29086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:37:34.546665   29086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:37:34.546721   29086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 17:37:34.552419   29086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 17:37:34.563757   29086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 17:37:34.567952   29086 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 17:37:34.568002   29086 kubeadm.go:392] StartCluster: {Name:addons-533488 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-533488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:37:34.568090   29086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 17:37:34.568137   29086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 17:37:34.608505   29086 cri.go:89] found id: ""
	I0807 17:37:34.608569   29086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 17:37:34.625918   29086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 17:37:34.636508   29086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 17:37:34.646217   29086 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 17:37:34.646238   29086 kubeadm.go:157] found existing configuration files:
	
	I0807 17:37:34.646296   29086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 17:37:34.655858   29086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 17:37:34.655912   29086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 17:37:34.666760   29086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 17:37:34.676290   29086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 17:37:34.676343   29086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 17:37:34.686004   29086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 17:37:34.697103   29086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 17:37:34.697162   29086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 17:37:34.706902   29086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 17:37:34.716306   29086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 17:37:34.716360   29086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 17:37:34.725789   29086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 17:37:34.787369   29086 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 17:37:34.787471   29086 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 17:37:34.919240   29086 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 17:37:34.919373   29086 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 17:37:34.919504   29086 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 17:37:35.156511   29086 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 17:37:35.264013   29086 out.go:204]   - Generating certificates and keys ...
	I0807 17:37:35.264113   29086 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 17:37:35.264247   29086 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 17:37:35.264360   29086 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 17:37:35.459811   29086 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 17:37:35.515439   29086 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 17:37:35.675682   29086 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 17:37:35.847649   29086 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 17:37:35.847794   29086 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-533488 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0807 17:37:35.910378   29086 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 17:37:35.910562   29086 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-533488 localhost] and IPs [192.168.39.180 127.0.0.1 ::1]
	I0807 17:37:36.054376   29086 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 17:37:36.200460   29086 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 17:37:36.564957   29086 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 17:37:36.565183   29086 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 17:37:36.742355   29086 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 17:37:36.905482   29086 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 17:37:37.057460   29086 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 17:37:37.159929   29086 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 17:37:37.269654   29086 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 17:37:37.270365   29086 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 17:37:37.272815   29086 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 17:37:37.274841   29086 out.go:204]   - Booting up control plane ...
	I0807 17:37:37.274947   29086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 17:37:37.275027   29086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 17:37:37.275829   29086 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 17:37:37.294295   29086 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 17:37:37.295259   29086 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 17:37:37.295312   29086 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 17:37:37.422202   29086 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 17:37:37.422301   29086 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 17:37:37.923919   29086 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339331ms
	I0807 17:37:37.924022   29086 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 17:37:42.925161   29086 kubeadm.go:310] [api-check] The API server is healthy after 5.002941318s
	I0807 17:37:42.937653   29086 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 17:37:42.949374   29086 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 17:37:42.973253   29086 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 17:37:42.973504   29086 kubeadm.go:310] [mark-control-plane] Marking the node addons-533488 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 17:37:42.987920   29086 kubeadm.go:310] [bootstrap-token] Using token: dzetgu.x6jwww6oy2x559pl
	I0807 17:37:42.989452   29086 out.go:204]   - Configuring RBAC rules ...
	I0807 17:37:42.989571   29086 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 17:37:43.000743   29086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 17:37:43.012624   29086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 17:37:43.015720   29086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 17:37:43.018717   29086 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 17:37:43.021635   29086 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 17:37:43.332138   29086 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 17:37:43.814706   29086 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 17:37:44.331014   29086 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 17:37:44.332175   29086 kubeadm.go:310] 
	I0807 17:37:44.332251   29086 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 17:37:44.332262   29086 kubeadm.go:310] 
	I0807 17:37:44.332370   29086 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 17:37:44.332398   29086 kubeadm.go:310] 
	I0807 17:37:44.332439   29086 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 17:37:44.332522   29086 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 17:37:44.332583   29086 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 17:37:44.332593   29086 kubeadm.go:310] 
	I0807 17:37:44.332664   29086 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 17:37:44.332676   29086 kubeadm.go:310] 
	I0807 17:37:44.332746   29086 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 17:37:44.332755   29086 kubeadm.go:310] 
	I0807 17:37:44.332829   29086 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 17:37:44.332925   29086 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 17:37:44.333020   29086 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 17:37:44.333033   29086 kubeadm.go:310] 
	I0807 17:37:44.333155   29086 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 17:37:44.333245   29086 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 17:37:44.333255   29086 kubeadm.go:310] 
	I0807 17:37:44.333326   29086 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dzetgu.x6jwww6oy2x559pl \
	I0807 17:37:44.333417   29086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 \
	I0807 17:37:44.333437   29086 kubeadm.go:310] 	--control-plane 
	I0807 17:37:44.333441   29086 kubeadm.go:310] 
	I0807 17:37:44.333546   29086 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 17:37:44.333555   29086 kubeadm.go:310] 
	I0807 17:37:44.333620   29086 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dzetgu.x6jwww6oy2x559pl \
	I0807 17:37:44.333725   29086 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 
	I0807 17:37:44.334161   29086 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 17:37:44.334222   29086 cni.go:84] Creating CNI manager for ""
	I0807 17:37:44.334234   29086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:37:44.336131   29086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 17:37:44.337518   29086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 17:37:44.349735   29086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0807 17:37:44.369095   29086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 17:37:44.369194   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:44.369217   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-533488 minikube.k8s.io/updated_at=2024_08_07T17_37_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=addons-533488 minikube.k8s.io/primary=true
	I0807 17:37:44.521547   29086 ops.go:34] apiserver oom_adj: -16
	I0807 17:37:44.521643   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:45.022167   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:45.522087   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:46.021716   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:46.522627   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:47.021909   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:47.522640   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:48.021852   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:48.522002   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:49.021831   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:49.522138   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:50.021847   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:50.522206   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:51.022296   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:51.522278   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:52.022542   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:52.522111   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:53.022322   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:53.521749   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:54.022095   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:54.522147   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:55.022255   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:55.521833   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:56.022551   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:56.522518   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:57.022549   29086 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 17:37:57.159884   29086 kubeadm.go:1113] duration metric: took 12.790749151s to wait for elevateKubeSystemPrivileges
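The repeated "kubectl get sa default" calls above are a wait loop: the bootstrapper polls roughly every 500ms until the default service account exists before the privilege elevation completes. A minimal sketch of that poll-until-ready pattern, assuming kubectl is run locally rather than over the ssh_runner; the timeout is an illustrative value:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// timeout expires, mirroring the half-second cadence seen in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC setup can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for default service account", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
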
	I0807 17:37:57.159926   29086 kubeadm.go:394] duration metric: took 22.591926954s to StartCluster
	I0807 17:37:57.159949   29086 settings.go:142] acquiring lock: {Name:mke44792daf8192c7cb4430e19df00c0686edd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:57.160090   29086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 17:37:57.160679   29086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/kubeconfig: {Name:mk9a4ad53bf4447453626a7769211592f39f92fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:37:57.160879   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 17:37:57.160918   29086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 17:37:57.160959   29086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0807 17:37:57.161051   29086 addons.go:69] Setting yakd=true in profile "addons-533488"
	I0807 17:37:57.161060   29086 addons.go:69] Setting inspektor-gadget=true in profile "addons-533488"
	I0807 17:37:57.161083   29086 addons.go:234] Setting addon inspektor-gadget=true in "addons-533488"
	I0807 17:37:57.161098   29086 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-533488"
	I0807 17:37:57.161117   29086 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-533488"
	I0807 17:37:57.161112   29086 addons.go:69] Setting gcp-auth=true in profile "addons-533488"
	I0807 17:37:57.161124   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161116   29086 addons.go:69] Setting storage-provisioner=true in profile "addons-533488"
	I0807 17:37:57.161141   29086 mustload.go:65] Loading cluster: addons-533488
	I0807 17:37:57.161146   29086 addons.go:69] Setting metrics-server=true in profile "addons-533488"
	I0807 17:37:57.161153   29086 config.go:182] Loaded profile config "addons-533488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 17:37:57.161162   29086 addons.go:234] Setting addon storage-provisioner=true in "addons-533488"
	I0807 17:37:57.161170   29086 addons.go:234] Setting addon metrics-server=true in "addons-533488"
	I0807 17:37:57.161197   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161205   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161090   29086 addons.go:234] Setting addon yakd=true in "addons-533488"
	I0807 17:37:57.161212   29086 addons.go:69] Setting helm-tiller=true in profile "addons-533488"
	I0807 17:37:57.161231   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161234   29086 addons.go:234] Setting addon helm-tiller=true in "addons-533488"
	I0807 17:37:57.161255   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161316   29086 config.go:182] Loaded profile config "addons-533488": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 17:37:57.161494   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161514   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161558   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161576   29086 addons.go:69] Setting ingress-dns=true in profile "addons-533488"
	I0807 17:37:57.161586   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161130   29086 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-533488"
	I0807 17:37:57.161596   29086 addons.go:234] Setting addon ingress-dns=true in "addons-533488"
	I0807 17:37:57.161598   29086 addons.go:69] Setting volcano=true in profile "addons-533488"
	I0807 17:37:57.161608   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161613   29086 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-533488"
	I0807 17:37:57.161615   29086 addons.go:234] Setting addon volcano=true in "addons-533488"
	I0807 17:37:57.161623   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161633   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161643   29086 addons.go:69] Setting registry=true in profile "addons-533488"
	I0807 17:37:57.161667   29086 addons.go:234] Setting addon registry=true in "addons-533488"
	I0807 17:37:57.161588   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161687   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161699   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161719   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161587   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161736   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161761   29086 addons.go:69] Setting cloud-spanner=true in profile "addons-533488"
	I0807 17:37:57.161201   29086 addons.go:69] Setting ingress=true in profile "addons-533488"
	I0807 17:37:57.161781   29086 addons.go:234] Setting addon cloud-spanner=true in "addons-533488"
	I0807 17:37:57.161784   29086 addons.go:234] Setting addon ingress=true in "addons-533488"
	I0807 17:37:57.161797   29086 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-533488"
	I0807 17:37:57.161849   29086 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-533488"
	I0807 17:37:57.161886   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.161954   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161977   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.161994   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162000   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161589   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162058   29086 addons.go:69] Setting volumesnapshots=true in profile "addons-533488"
	I0807 17:37:57.162085   29086 addons.go:234] Setting addon volumesnapshots=true in "addons-533488"
	I0807 17:37:57.161634   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.162287   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.162313   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.161800   29086 addons.go:69] Setting default-storageclass=true in profile "addons-533488"
	I0807 17:37:57.162372   29086 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-533488"
	I0807 17:37:57.162129   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.162565   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.162589   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162599   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.162646   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.162669   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162688   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.162705   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162760   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.162803   29086 out.go:177] * Verifying Kubernetes components...
	I0807 17:37:57.162811   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.162852   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.162887   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.163241   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.163267   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.163353   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.163406   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.164361   29086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 17:37:57.182813   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0807 17:37:57.183038   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44499
	I0807 17:37:57.183338   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.184030   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.184112   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40639
	I0807 17:37:57.184583   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.184605   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.184927   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.184976   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40693
	I0807 17:37:57.185425   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.185445   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.185470   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.185533   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.185562   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.185768   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.185923   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.185943   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.186704   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.186899   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.188640   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.188666   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.188817   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.188853   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.196173   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I0807 17:37:57.196214   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
	I0807 17:37:57.196180   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.196420   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0807 17:37:57.196510   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.196656   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.196699   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.196749   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.196811   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.197608   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.197626   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.197662   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.197690   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.197964   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.198252   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.198546   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.198581   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.198825   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.198874   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.199325   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.199352   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.199420   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.199726   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.199845   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.199858   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.200362   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.200392   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.200905   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.201484   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.201510   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.226548   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I0807 17:37:57.227007   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.228046   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.228069   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.228575   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.229150   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.229177   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.234194   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43083
	I0807 17:37:57.234635   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.235131   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.235150   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.235477   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.235684   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.236928   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0807 17:37:57.237565   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.238331   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.238365   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.240681   29086 addons.go:234] Setting addon default-storageclass=true in "addons-533488"
	I0807 17:37:57.240720   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.241070   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.241120   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.241386   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0807 17:37:57.241538   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39473
	I0807 17:37:57.242669   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.242679   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.242694   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0807 17:37:57.242673   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0807 17:37:57.242908   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.243347   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.243355   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.243387   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.243433   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.243600   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.243618   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.243675   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.243691   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.243832   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.243845   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.244016   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.244031   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.244245   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.244249   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.244528   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.244586   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.245152   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.245179   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.245157   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.245337   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.245674   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.246316   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.246747   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0807 17:37:57.246951   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0807 17:37:57.247140   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.247211   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.247645   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.247663   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.247799   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.247808   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.247887   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0807 17:37:57.248338   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.248444   29086 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0807 17:37:57.248483   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.248512   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.248862   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.248910   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.248963   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.249479   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.249499   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.249847   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.249872   29086 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0807 17:37:57.249886   29086 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0807 17:37:57.249911   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.250039   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.250094   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.253911   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0807 17:37:57.254088   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.254130   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.265343   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.265506   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.265556   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.265560   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0807 17:37:57.265625   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I0807 17:37:57.265847   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.265866   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.265892   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.265917   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.265934   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.265983   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0807 17:37:57.266067   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0807 17:37:57.266652   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.266729   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.266787   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.267029   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.267809   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.267859   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.267913   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.268441   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.268535   29086 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0807 17:37:57.269029   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
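Each "sshutil.go:53] new ssh client" line above corresponds to opening another SSH session to the node at 192.168.39.180 as the docker user with the machine's id_rsa key. A rough equivalent using golang.org/x/crypto/ssh, offered as a stand-in for minikube's own sshutil helper, with host-key verification skipped purely for brevity:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address taken from the log; everything else is illustrative.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // not for production use
	}
	client, err := ssh.Dial("tcp", "192.168.39.180:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
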
	I0807 17:37:57.269332   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.269345   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.269994   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.270008   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.270364   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.270365   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.270626   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.270899   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.270948   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.271229   29086 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:37:57.271652   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.271668   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.271792   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.271804   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.272175   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.272256   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.272454   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.272526   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0807 17:37:57.272752   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.273128   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.273795   29086 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:37:57.274678   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.275284   29086 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 17:37:57.275300   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0807 17:37:57.275318   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.275400   29086 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 17:37:57.275437   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0807 17:37:57.276278   29086 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0807 17:37:57.277980   29086 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0807 17:37:57.278001   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0807 17:37:57.278018   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.278900   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0807 17:37:57.278986   29086 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-533488"
	I0807 17:37:57.279027   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:37:57.279052   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39565
	I0807 17:37:57.279375   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.279409   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.279432   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.279435   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.279811   29086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:37:57.279824   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 17:37:57.279838   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.279891   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.279912   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.279979   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.279998   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.280328   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.280829   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.280867   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.281065   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.281252   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.281928   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.282018   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0807 17:37:57.282427   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.282456   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.283109   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.283508   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.283572   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.283721   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.283863   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.284061   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.284369   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.284395   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.284431   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.284567   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.284721   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.284787   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.284841   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.285367   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0807 17:37:57.286878   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0807 17:37:57.288247   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0807 17:37:57.288315   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0807 17:37:57.288397   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.288427   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.288441   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.288646   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.288844   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.288994   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.289605   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0807 17:37:57.289621   29086 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0807 17:37:57.289637   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.290776   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0807 17:37:57.291981   29086 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0807 17:37:57.293154   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.293322   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0807 17:37:57.293345   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0807 17:37:57.293362   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.293563   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.293656   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.293727   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.293928   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.294311   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.294440   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.295838   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40865
	I0807 17:37:57.296743   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.297453   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.297474   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.297568   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0807 17:37:57.298024   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.298448   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.298605   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.298617   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.299016   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.299029   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.299045   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.299238   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.299988   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.300012   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.300154   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.300369   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.300499   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.300621   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.301006   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.301610   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.302506   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0807 17:37:57.302739   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0807 17:37:57.302921   29086 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0807 17:37:57.303004   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.303097   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.303446   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42271
	I0807 17:37:57.303726   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.303805   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.303810   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.303819   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.303824   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.303807   29086 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0807 17:37:57.304126   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.304176   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.304183   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.304302   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.304759   29086 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 17:37:57.304776   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0807 17:37:57.304780   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.304791   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.304826   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.304848   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.305684   29086 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 17:37:57.305701   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0807 17:37:57.305717   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.308316   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.309463   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.311114   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.311337   29086 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0807 17:37:57.311753   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.312488   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.312532   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.312729   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.312754   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.312735   29086 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0807 17:37:57.312774   29086 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0807 17:37:57.312789   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.312929   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35415
	I0807 17:37:57.313122   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.313409   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45953
	I0807 17:37:57.313638   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.313685   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.313850   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.313914   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.313987   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.314113   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.314160   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.314236   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.314509   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.314779   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.315001   29086 out.go:177]   - Using image docker.io/registry:2.8.3
	I0807 17:37:57.315039   29086 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 17:37:57.315050   29086 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 17:37:57.315069   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.315602   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.315615   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.315660   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.316010   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.316199   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.316435   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.316666   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.316883   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.317097   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.317584   29086 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0807 17:37:57.318576   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.318618   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.318842   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:37:57.318864   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:37:57.319261   29086 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0807 17:37:57.319275   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0807 17:37:57.319289   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.320330   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.320364   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:37:57.320335   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.320398   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.320416   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.320656   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.320663   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0807 17:37:57.320830   29086 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0807 17:37:57.320843   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:37:57.320859   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:37:57.320868   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:37:57.321307   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:37:57.321326   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:37:57.321340   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.321346   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:37:57.321409   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	W0807 17:37:57.321423   29086 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
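The volcano addon fails here because its enable callback gates on the container runtime and crio is not in its supported set. A hypothetical sketch of such a gate; the supported-runtime list is an assumption for illustration, not minikube's actual addon registry:

package main

import "fmt"

// enableAddon rejects runtimes an addon does not declare support for.
func enableAddon(name, runtime string, supported map[string][]string) error {
	for _, rt := range supported[name] {
		if rt == runtime {
			return nil
		}
	}
	return fmt.Errorf("%s addon does not support %s", name, runtime)
}

func main() {
	supported := map[string][]string{"volcano": {"docker", "containerd"}} // assumed
	if err := enableAddon("volcano", "crio", supported); err != nil {
		fmt.Println("! Enabling 'volcano' returned an error:", err)
	}
}
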
	I0807 17:37:57.321657   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.321657   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.321850   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.321863   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.321926   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.321945   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.321983   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.322139   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.322293   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.322305   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.322386   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.322532   29086 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0807 17:37:57.323112   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:37:57.323150   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:37:57.323594   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.323993   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.324015   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.324054   29086 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0807 17:37:57.324070   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0807 17:37:57.324086   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.324282   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.324437   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.324540   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.324780   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.324926   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45821
	I0807 17:37:57.325300   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.325804   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.325855   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.326166   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.326285   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	W0807 17:37:57.326328   29086 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34070->192.168.39.180:22: read: connection reset by peer
	I0807 17:37:57.326352   29086 retry.go:31] will retry after 181.768689ms: ssh: handshake failed: read tcp 192.168.39.1:34070->192.168.39.180:22: read: connection reset by peer
	I0807 17:37:57.328192   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.328485   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.328664   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.328687   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.328840   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.328967   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.329057   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.329221   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.330431   29086 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0807 17:37:57.331856   29086 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0807 17:37:57.331875   29086 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0807 17:37:57.331888   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.334984   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.335330   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.335347   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.335536   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.335726   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.335916   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.336058   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	W0807 17:37:57.338571   29086 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34086->192.168.39.180:22: read: connection reset by peer
	I0807 17:37:57.338592   29086 retry.go:31] will retry after 323.995523ms: ssh: handshake failed: read tcp 192.168.39.1:34086->192.168.39.180:22: read: connection reset by peer
	I0807 17:37:57.345288   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46049
	I0807 17:37:57.345708   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:37:57.346219   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:37:57.346237   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:37:57.346549   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:37:57.346769   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:37:57.348631   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:37:57.350833   29086 out.go:177]   - Using image docker.io/busybox:stable
	I0807 17:37:57.352463   29086 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0807 17:37:57.353982   29086 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 17:37:57.353997   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0807 17:37:57.354014   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:37:57.356999   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.357409   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:37:57.357440   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:37:57.357684   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:37:57.357878   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:37:57.358027   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:37:57.358186   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:37:57.582815   29086 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0807 17:37:57.582840   29086 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0807 17:37:57.630394   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0807 17:37:57.632978   29086 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0807 17:37:57.633003   29086 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0807 17:37:57.671926   29086 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0807 17:37:57.671945   29086 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0807 17:37:57.699686   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 17:37:57.711588   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0807 17:37:57.783393   29086 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0807 17:37:57.783413   29086 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0807 17:37:57.784036   29086 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0807 17:37:57.784048   29086 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0807 17:37:57.786543   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0807 17:37:57.786556   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0807 17:37:57.800759   29086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0807 17:37:57.800778   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0807 17:37:57.827889   29086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 17:37:57.827896   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 17:37:57.864255   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0807 17:37:57.877023   29086 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0807 17:37:57.877042   29086 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0807 17:37:57.879497   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0807 17:37:57.890360   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 17:37:57.939002   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0807 17:37:57.941544   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0807 17:37:58.002185   29086 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0807 17:37:58.002210   29086 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0807 17:37:58.020460   29086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0807 17:37:58.020486   29086 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0807 17:37:58.048459   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0807 17:37:58.048483   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0807 17:37:58.122662   29086 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0807 17:37:58.122696   29086 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0807 17:37:58.195324   29086 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0807 17:37:58.195348   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0807 17:37:58.234010   29086 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0807 17:37:58.234037   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0807 17:37:58.381982   29086 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 17:37:58.382001   29086 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0807 17:37:58.386074   29086 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0807 17:37:58.386089   29086 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0807 17:37:58.386980   29086 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0807 17:37:58.386998   29086 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0807 17:37:58.395475   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0807 17:37:58.395498   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0807 17:37:58.462260   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0807 17:37:58.570013   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0807 17:37:58.570034   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0807 17:37:58.583264   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0807 17:37:58.640049   29086 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0807 17:37:58.640082   29086 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0807 17:37:58.650482   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0807 17:37:58.650513   29086 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0807 17:37:58.654539   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0807 17:37:58.817930   29086 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0807 17:37:58.817955   29086 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0807 17:37:58.909589   29086 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:37:58.909607   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0807 17:37:58.924071   29086 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0807 17:37:58.924088   29086 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0807 17:37:59.160669   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:37:59.201871   29086 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0807 17:37:59.201896   29086 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0807 17:37:59.206803   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0807 17:37:59.206821   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0807 17:37:59.619976   29086 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0807 17:37:59.619998   29086 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0807 17:37:59.671998   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0807 17:37:59.672024   29086 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0807 17:37:59.779634   29086 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0807 17:37:59.779661   29086 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0807 17:37:59.966902   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0807 17:37:59.966936   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0807 17:38:00.144915   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0807 17:38:00.144942   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0807 17:38:00.153147   29086 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 17:38:00.153168   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0807 17:38:00.459819   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0807 17:38:00.463849   29086 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 17:38:00.463877   29086 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0807 17:38:00.603870   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0807 17:38:04.282051   29086 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0807 17:38:04.282109   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:38:04.285206   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:38:04.285642   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:38:04.285679   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:38:04.285792   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:38:04.286016   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:38:04.286181   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:38:04.286329   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:38:04.723343   29086 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0807 17:38:04.773651   29086 addons.go:234] Setting addon gcp-auth=true in "addons-533488"
	I0807 17:38:04.773719   29086 host.go:66] Checking if "addons-533488" exists ...
	I0807 17:38:04.774147   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:38:04.774201   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:38:04.790299   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
	I0807 17:38:04.790794   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:38:04.791328   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:38:04.791352   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:38:04.791709   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:38:04.792395   29086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 17:38:04.792429   29086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 17:38:04.808199   29086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0807 17:38:04.808614   29086 main.go:141] libmachine: () Calling .GetVersion
	I0807 17:38:04.809127   29086 main.go:141] libmachine: Using API Version  1
	I0807 17:38:04.809149   29086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 17:38:04.809474   29086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 17:38:04.809693   29086 main.go:141] libmachine: (addons-533488) Calling .GetState
	I0807 17:38:04.811567   29086 main.go:141] libmachine: (addons-533488) Calling .DriverName
	I0807 17:38:04.811805   29086 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0807 17:38:04.811831   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHHostname
	I0807 17:38:04.815069   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:38:04.815493   29086 main.go:141] libmachine: (addons-533488) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:25:52", ip: ""} in network mk-addons-533488: {Iface:virbr1 ExpiryTime:2024-08-07 18:37:15 +0000 UTC Type:0 Mac:52:54:00:17:25:52 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:addons-533488 Clientid:01:52:54:00:17:25:52}
	I0807 17:38:04.815519   29086 main.go:141] libmachine: (addons-533488) DBG | domain addons-533488 has defined IP address 192.168.39.180 and MAC address 52:54:00:17:25:52 in network mk-addons-533488
	I0807 17:38:04.815707   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHPort
	I0807 17:38:04.815978   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHKeyPath
	I0807 17:38:04.816223   29086 main.go:141] libmachine: (addons-533488) Calling .GetSSHUsername
	I0807 17:38:04.816404   29086 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/addons-533488/id_rsa Username:docker}
	I0807 17:38:05.996419   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.365991845s)
	I0807 17:38:05.996472   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996477   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.296764854s)
	I0807 17:38:05.996483   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996532   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.284922073s)
	I0807 17:38:05.996554   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996564   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996512   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996600   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996626   29086 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.168675606s)
	I0807 17:38:05.996645   29086 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0807 17:38:05.996610   29086 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.168685971s)
	I0807 17:38:05.996655   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.132375325s)
	I0807 17:38:05.996680   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996690   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996697   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.117179481s)
	I0807 17:38:05.996735   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996748   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996770   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.106391414s)
	I0807 17:38:05.996786   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996797   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996843   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.057811866s)
	I0807 17:38:05.996861   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996870   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996890   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.055311709s)
	I0807 17:38:05.996907   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996917   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996954   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.534669217s)
	I0807 17:38:05.996971   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.996976   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.41367877s)
	I0807 17:38:05.996994   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997003   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.996980   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997043   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.342481002s)
	I0807 17:38:05.997075   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997085   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997176   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.836476358s)
	W0807 17:38:05.997221   29086 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0807 17:38:05.997248   29086 retry.go:31] will retry after 330.237435ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0807 17:38:05.997300   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.997312   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.997322   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997330   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997348   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.537488532s)
	I0807 17:38:05.997366   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997375   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997489   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.997507   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.997539   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.997579   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.997587   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.997595   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997602   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997732   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.997743   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.997751   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997769   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997786   29086 node_ready.go:35] waiting up to 6m0s for node "addons-533488" to be "Ready" ...
	I0807 17:38:05.997806   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.997818   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.997833   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.997843   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.997969   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.997991   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.997998   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998006   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.998013   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.998057   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.998077   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998085   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998095   29086 addons.go:475] Verifying addon registry=true in "addons-533488"
	I0807 17:38:05.998506   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998517   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998696   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998705   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998713   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.998720   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.998762   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.998782   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998788   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998795   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.998802   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.998836   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.998851   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.998866   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998873   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998880   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.998886   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.998920   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.998928   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.998934   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:05.998940   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:05.999083   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.999110   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.999116   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.999338   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.999357   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.999390   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.999398   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.999788   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.999817   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.999828   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.999952   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:05.999972   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:05.999978   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:05.999985   29086 addons.go:475] Verifying addon ingress=true in "addons-533488"
	I0807 17:38:06.000151   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.000160   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.000177   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.000184   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.000108   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.000248   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.000248   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.000259   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.000265   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.000268   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.000275   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.000282   29086 addons.go:475] Verifying addon metrics-server=true in "addons-533488"
	I0807 17:38:06.000342   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.000344   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.000362   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.000364   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.000369   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.000370   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.000386   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.000394   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.000124   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.000805   29086 out.go:177] * Verifying registry addon...
	I0807 17:38:06.001651   29086 out.go:177] * Verifying ingress addon...
	I0807 17:38:06.002244   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.002266   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.002279   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.002293   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.002268   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.002309   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.002552   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.002577   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.002584   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.002838   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.002849   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.004462   29086 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0807 17:38:06.005371   29086 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0807 17:38:06.005414   29086 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-533488 service yakd-dashboard -n yakd-dashboard
	
	I0807 17:38:06.016787   29086 node_ready.go:49] node "addons-533488" has status "Ready":"True"
	I0807 17:38:06.016813   29086 node_ready.go:38] duration metric: took 19.011857ms for node "addons-533488" to be "Ready" ...
	I0807 17:38:06.016825   29086 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:38:06.025748   29086 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0807 17:38:06.025777   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:06.038506   29086 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0807 17:38:06.038526   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:06.050081   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.050104   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.050437   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.050437   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.050466   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	W0807 17:38:06.050559   29086 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0807 17:38:06.062519   29086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-brzz5" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.067713   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.067740   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.068070   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.068137   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.068159   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.177346   29086 pod_ready.go:92] pod "coredns-7db6d8ff4d-brzz5" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.177384   29086 pod_ready.go:81] duration metric: took 114.835104ms for pod "coredns-7db6d8ff4d-brzz5" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.177397   29086 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s6vqw" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.267385   29086 pod_ready.go:92] pod "coredns-7db6d8ff4d-s6vqw" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.267413   29086 pod_ready.go:81] duration metric: took 90.007397ms for pod "coredns-7db6d8ff4d-s6vqw" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.267426   29086 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.295334   29086 pod_ready.go:92] pod "etcd-addons-533488" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.295358   29086 pod_ready.go:81] duration metric: took 27.923523ms for pod "etcd-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.295367   29086 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.311077   29086 pod_ready.go:92] pod "kube-apiserver-addons-533488" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.311106   29086 pod_ready.go:81] duration metric: took 15.733388ms for pod "kube-apiserver-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.311120   29086 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.328550   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0807 17:38:06.405463   29086 pod_ready.go:92] pod "kube-controller-manager-addons-533488" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.405492   29086 pod_ready.go:81] duration metric: took 94.362313ms for pod "kube-controller-manager-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.405509   29086 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d687t" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.500442   29086 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-533488" context rescaled to 1 replicas
	I0807 17:38:06.512874   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:06.517549   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:06.808319   29086 pod_ready.go:92] pod "kube-proxy-d687t" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:06.808341   29086 pod_ready.go:81] duration metric: took 402.824636ms for pod "kube-proxy-d687t" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.808350   29086 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:06.845471   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.24155019s)
	I0807 17:38:06.845523   29086 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.033696338s)
	I0807 17:38:06.845522   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.845641   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.845981   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.846050   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.846066   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.846077   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:06.846089   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:06.846308   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:06.846327   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:06.846332   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:06.846344   29086 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-533488"
	I0807 17:38:06.847594   29086 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0807 17:38:06.848420   29086 out.go:177] * Verifying csi-hostpath-driver addon...
	I0807 17:38:06.850216   29086 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0807 17:38:06.850956   29086 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0807 17:38:06.851139   29086 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0807 17:38:06.851171   29086 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0807 17:38:06.873461   29086 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0807 17:38:06.873491   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:06.909507   29086 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0807 17:38:06.909533   29086 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0807 17:38:06.933712   29086 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 17:38:06.933730   29086 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0807 17:38:06.988067   29086 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0807 17:38:07.011722   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:07.013514   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:07.201149   29086 pod_ready.go:92] pod "kube-scheduler-addons-533488" in "kube-system" namespace has status "Ready":"True"
	I0807 17:38:07.201169   29086 pod_ready.go:81] duration metric: took 392.8126ms for pod "kube-scheduler-addons-533488" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:07.201179   29086 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace to be "Ready" ...
	I0807 17:38:07.357399   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:07.510630   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:07.513181   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:07.873753   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:08.010285   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:08.014158   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:08.362546   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:08.509988   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:08.510143   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:08.859821   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:09.013505   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:09.013885   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:09.214394   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:09.364411   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:09.509467   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:09.510650   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:09.747021   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.758915584s)
	I0807 17:38:09.747074   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:09.747092   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:09.747210   29086 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.418610447s)
	I0807 17:38:09.747246   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:09.747260   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:09.747376   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:09.747395   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:09.747406   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:09.747413   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:09.747528   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:09.747556   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:09.747581   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:09.747602   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:09.747638   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:09.747645   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:09.747746   29086 main.go:141] libmachine: Making call to close driver server
	I0807 17:38:09.747759   29086 main.go:141] libmachine: (addons-533488) Calling .Close
	I0807 17:38:09.748006   29086 main.go:141] libmachine: (addons-533488) DBG | Closing plugin on server side
	I0807 17:38:09.748031   29086 main.go:141] libmachine: Successfully made call to close driver server
	I0807 17:38:09.748047   29086 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 17:38:09.749833   29086 addons.go:475] Verifying addon gcp-auth=true in "addons-533488"
	I0807 17:38:09.751854   29086 out.go:177] * Verifying gcp-auth addon...
	I0807 17:38:09.753574   29086 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0807 17:38:09.758125   29086 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0807 17:38:09.758140   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:09.863431   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:10.010412   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:10.012299   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:10.258836   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:10.356615   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:10.511331   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:10.511661   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:10.760585   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:10.857203   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:11.010107   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:11.010389   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:11.257712   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:11.357644   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:11.510087   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:11.510922   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:11.706923   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:11.757647   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:11.857166   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:12.009646   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:12.010193   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:12.257739   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:12.357205   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:12.516024   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:12.524161   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:12.757016   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:12.857275   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:13.013873   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:13.015350   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:13.257011   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:13.357330   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:13.511470   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:13.511534   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:13.993597   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:13.996144   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:13.998330   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:14.009312   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:14.012567   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:14.257037   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:14.357768   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:14.509203   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:14.512033   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:14.843956   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:14.857292   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:15.008747   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:15.011295   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:15.257002   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:15.357837   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:15.509468   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:15.510209   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:15.758082   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:15.858075   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:16.011195   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:16.011371   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:16.207869   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:16.257021   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:16.358834   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:16.509541   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:16.509683   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:16.756969   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:16.856622   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:17.010340   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:17.010567   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:17.257217   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:17.355760   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:17.511318   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:17.511331   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:17.759844   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:17.856702   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:18.010889   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:18.011032   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:18.258033   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:18.360195   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:18.509536   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:18.510139   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:18.707836   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:18.757904   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:18.856898   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:19.012655   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:19.019164   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:19.257386   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:19.357682   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:19.510513   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:19.510517   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:19.757192   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:19.857020   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:20.009318   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:20.012058   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:20.257053   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:20.357894   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:20.509191   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:20.510826   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:20.757906   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:20.856653   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:21.010726   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:21.011279   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:21.206952   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:21.257164   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:21.357109   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:21.511113   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:21.511390   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:21.756698   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:21.857367   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:22.111108   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:22.113076   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:22.258012   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:22.357244   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:22.510482   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:22.510824   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:22.757355   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:22.860669   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:23.008721   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:23.009500   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:23.207898   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:23.257293   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:23.355929   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:23.510850   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:23.510954   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:23.756997   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:23.861061   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:24.009716   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:24.011743   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:24.259858   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:24.357368   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:24.511435   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:24.511557   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:24.758864   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:24.856120   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:25.009687   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:25.009900   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:25.262306   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:25.356395   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:25.509263   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:25.509996   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:25.707251   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:25.757810   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:25.857283   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:26.009970   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:26.012136   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:26.260932   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:26.361715   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:26.509935   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:26.511987   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:26.757435   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:26.856779   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:27.011314   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:27.012470   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:27.268044   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:27.363742   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:27.514095   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:27.514288   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:27.709547   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:27.757530   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:27.857702   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:28.009585   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:28.010109   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:28.257946   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:28.359519   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:28.509645   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:28.510502   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:28.757528   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:28.857035   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:29.009380   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:29.010099   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:29.257760   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:29.356302   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:29.509743   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:29.510065   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:29.757970   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:29.856176   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:30.008994   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:30.012413   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:30.237845   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:30.258712   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:30.356802   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:30.510115   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:30.510416   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:30.756850   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:30.857075   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:31.012893   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:31.013320   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:31.256967   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:31.356445   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:31.509871   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:31.511492   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:31.757969   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:31.859835   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:32.009922   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:32.009919   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:32.257448   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:32.357499   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:32.509715   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:32.509919   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:32.708463   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:32.757754   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:32.857308   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:33.010112   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:33.010194   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:33.257536   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:33.356895   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:33.508768   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:33.511366   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:33.758363   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:33.856725   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:34.011156   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:34.011333   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:34.258885   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:34.359758   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:34.511568   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:34.511917   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:34.756843   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:34.864549   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:35.010147   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:35.010495   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:35.208564   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:35.257174   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:35.355680   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:35.509635   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:35.510097   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:35.756567   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:35.856566   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:36.009447   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:36.010848   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:36.257613   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:36.356486   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:36.510048   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:36.510620   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:36.757548   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:36.856649   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:37.009843   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:37.009986   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:37.258855   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:37.359541   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:37.512846   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:37.515236   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:37.708014   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:37.757195   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:37.855623   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:38.010777   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:38.011067   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:38.257335   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:38.357267   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:38.509120   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:38.509215   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:38.757212   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:38.860747   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:39.010941   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:39.011182   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:39.259797   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:39.357557   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:39.510633   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:39.511660   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:39.757452   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:39.856815   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:40.010285   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:40.011543   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:40.207183   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:40.257138   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:40.358201   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:40.577675   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:40.578702   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:40.803436   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:40.857306   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:41.011660   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:41.012980   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:41.257233   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:41.357550   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:41.510424   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:41.510454   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:41.757649   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:41.858292   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:42.010505   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:42.010531   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:42.257614   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:42.363455   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:42.631064   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:42.631435   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:42.707181   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:42.758283   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:42.856606   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:43.009381   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:43.009606   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:43.257562   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:43.356819   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:43.509191   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:43.509198   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0807 17:38:43.757394   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:43.858994   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:44.010268   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:44.010654   29086 kapi.go:107] duration metric: took 38.006192746s to wait for kubernetes.io/minikube-addons=registry ...
	I0807 17:38:44.257794   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:44.356573   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:44.509669   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:44.757824   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:44.857669   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:45.010060   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:45.208495   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:45.261263   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:45.356356   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:45.510489   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:45.757466   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:45.856868   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:46.010309   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:46.257358   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:46.548703   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:46.553420   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:46.756752   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:46.856668   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:47.009462   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:47.258918   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:47.358307   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:47.510231   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:47.707777   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:47.757894   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:47.860933   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:48.010258   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:48.257456   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:48.361910   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:48.510055   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:48.765793   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:48.870790   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:49.011222   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:49.257072   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:49.357361   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:49.510154   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:49.759398   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:49.857452   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:50.009867   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:50.207788   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:50.258553   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:50.356419   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:50.510260   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:50.757749   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:50.856580   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:51.010023   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:51.256971   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:51.359336   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:51.509818   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:51.762844   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:51.859989   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:52.010798   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:52.256759   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:52.357347   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:52.510368   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:52.798961   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:52.802138   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:52.856770   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:53.009644   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:53.257818   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:53.357315   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:53.509450   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:53.759172   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:53.856166   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:54.010424   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:54.258470   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:54.356814   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:54.510604   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:54.757992   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:54.856901   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:55.011107   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:55.206643   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:55.257441   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:55.356450   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:55.509467   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:56.167227   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:56.168442   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:56.168463   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:56.259616   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:56.357165   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:56.510147   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:56.757058   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:56.856905   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:57.010619   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:57.208380   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:57.258987   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:57.357140   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:57.509583   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:57.757350   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:57.856550   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:58.009612   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:58.257805   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:58.356951   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:58.509926   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:58.757158   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:58.855878   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:59.011777   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:59.258776   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:59.356831   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:38:59.509988   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:38:59.707996   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:38:59.757101   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:38:59.860879   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:00.009991   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:00.257450   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:00.356836   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:00.509806   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:00.758207   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:00.862945   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:01.010378   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:01.258313   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:01.366526   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:01.510976   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:01.708189   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:01.757603   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:01.864651   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:02.013513   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:02.257602   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:02.357408   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:02.509655   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:02.757393   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:02.861923   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:03.010057   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:03.257879   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:03.357438   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:03.510954   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:03.756937   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:03.857129   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:04.010271   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:04.208439   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:04.257922   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:04.357776   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:04.510805   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:05.098390   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:05.098685   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:05.099813   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:05.258565   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:05.360777   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:05.510559   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:05.757541   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:05.859390   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:06.011012   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:06.257609   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:06.374433   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:06.511060   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:06.708966   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:06.759021   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:06.856768   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:07.010127   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:07.257977   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:07.356905   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:07.510220   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:07.758201   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:07.857017   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:08.010707   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:08.259832   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:08.356443   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:08.511552   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:08.714369   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:08.760358   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:08.861253   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:09.009152   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:09.259319   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:09.356262   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:09.509672   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:09.756853   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:09.856559   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:10.012560   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:10.257396   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:10.360089   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:10.509625   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:10.757908   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:10.859492   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:11.010009   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:11.208093   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:11.257099   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:11.357803   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:11.511772   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:11.759705   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:11.856654   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:12.010047   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:12.257667   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:12.357226   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:12.509667   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:12.758138   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:12.857956   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:13.219302   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:13.222083   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:13.263683   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:13.357090   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:13.519444   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:13.758093   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:13.866939   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:14.014503   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:14.258553   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:14.357325   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:14.510533   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:14.757575   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:14.856679   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:15.010198   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:15.259508   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:15.360758   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:15.510990   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:15.711612   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:15.772484   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:15.857109   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:16.013089   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:16.258013   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:16.357138   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:16.510883   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:16.758582   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:16.861015   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:17.010298   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:17.257652   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:17.360013   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:17.509981   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:17.756834   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:17.857975   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:18.010539   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:18.479662   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:18.481847   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:18.482787   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:18.509303   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:18.757398   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:18.855961   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:19.009918   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:19.258237   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:19.356126   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:19.512270   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:19.759400   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:19.856876   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:20.009715   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:20.256739   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:20.356715   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:20.510575   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:20.708414   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:20.757455   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:20.856070   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:21.009513   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:21.259414   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:21.358657   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:21.510233   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:21.757150   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:21.856845   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:22.010833   29086 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0807 17:39:22.258290   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:22.356824   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:22.510316   29086 kapi.go:107] duration metric: took 1m16.504942984s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0807 17:39:22.762217   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:22.857111   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:23.209796   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:23.258084   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:23.356928   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:23.757078   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:23.857083   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:24.257495   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:24.356720   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:24.756911   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:24.856508   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:25.257598   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:25.356287   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:25.713001   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:25.756867   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0807 17:39:25.857637   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:26.257985   29086 kapi.go:107] duration metric: took 1m16.504408751s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0807 17:39:26.259730   29086 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-533488 cluster.
	I0807 17:39:26.260925   29086 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0807 17:39:26.262061   29086 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0807 17:39:26.357292   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:26.856034   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:27.357364   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:27.857380   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:28.207079   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:28.357344   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:29.082429   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:29.357264   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:29.857257   29086 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0807 17:39:30.213700   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:30.359072   29086 kapi.go:107] duration metric: took 1m23.508109455s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0807 17:39:30.361023   29086 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, metrics-server, storage-provisioner, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0807 17:39:30.362558   29086 addons.go:510] duration metric: took 1m33.201595301s for enable addons: enabled=[ingress-dns nvidia-device-plugin metrics-server storage-provisioner helm-tiller cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0807 17:39:32.707464   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:34.707562   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:36.708667   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:39.207122   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:41.208531   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:43.707717   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:45.709486   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:48.207305   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:50.208586   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:52.210364   29086 pod_ready.go:102] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"False"
	I0807 17:39:54.211980   29086 pod_ready.go:92] pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace has status "Ready":"True"
	I0807 17:39:54.212004   29086 pod_ready.go:81] duration metric: took 1m47.010818721s for pod "metrics-server-c59844bb4-tq82q" in "kube-system" namespace to be "Ready" ...
	I0807 17:39:54.212013   29086 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7xdvc" in "kube-system" namespace to be "Ready" ...
	I0807 17:39:54.216281   29086 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-7xdvc" in "kube-system" namespace has status "Ready":"True"
	I0807 17:39:54.216301   29086 pod_ready.go:81] duration metric: took 4.280841ms for pod "nvidia-device-plugin-daemonset-7xdvc" in "kube-system" namespace to be "Ready" ...
	I0807 17:39:54.216326   29086 pod_ready.go:38] duration metric: took 1m48.199489718s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 17:39:54.216349   29086 api_server.go:52] waiting for apiserver process to appear ...
	I0807 17:39:54.216384   29086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0807 17:39:54.216443   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0807 17:39:54.261565   29086 cri.go:89] found id: "507405bd1f474bb56c0969241df68bb550c529c15fe9df8f1709ecf07aa10162"
	I0807 17:39:54.261589   29086 cri.go:89] found id: ""
	I0807 17:39:54.261599   29086 logs.go:276] 1 containers: [507405bd1f474bb56c0969241df68bb550c529c15fe9df8f1709ecf07aa10162]
	I0807 17:39:54.261656   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.265910   29086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0807 17:39:54.265974   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0807 17:39:54.305913   29086 cri.go:89] found id: "5149620cc13749dc34198a033be89b0463f819aaa175b650f81e0d8655818231"
	I0807 17:39:54.305939   29086 cri.go:89] found id: ""
	I0807 17:39:54.305948   29086 logs.go:276] 1 containers: [5149620cc13749dc34198a033be89b0463f819aaa175b650f81e0d8655818231]
	I0807 17:39:54.306009   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.310018   29086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0807 17:39:54.310071   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0807 17:39:54.359999   29086 cri.go:89] found id: "f1f70a5b6a7bd09da5a2c01a1d9d3c7b71848bc75b8377d47e2c1ed5b8299263"
	I0807 17:39:54.360028   29086 cri.go:89] found id: ""
	I0807 17:39:54.360039   29086 logs.go:276] 1 containers: [f1f70a5b6a7bd09da5a2c01a1d9d3c7b71848bc75b8377d47e2c1ed5b8299263]
	I0807 17:39:54.360106   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.364380   29086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0807 17:39:54.364443   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0807 17:39:54.401497   29086 cri.go:89] found id: "6e861f2e5e27fb937da023ac22e2772323e0f6d7577a3f615d215870f66b6e4f"
	I0807 17:39:54.401518   29086 cri.go:89] found id: ""
	I0807 17:39:54.401526   29086 logs.go:276] 1 containers: [6e861f2e5e27fb937da023ac22e2772323e0f6d7577a3f615d215870f66b6e4f]
	I0807 17:39:54.401568   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.405742   29086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0807 17:39:54.405791   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0807 17:39:54.445263   29086 cri.go:89] found id: "f64277264b4abde694037ac51dd86a01f874641022121a58330e7f7061e614e2"
	I0807 17:39:54.445280   29086 cri.go:89] found id: ""
	I0807 17:39:54.445287   29086 logs.go:276] 1 containers: [f64277264b4abde694037ac51dd86a01f874641022121a58330e7f7061e614e2]
	I0807 17:39:54.445328   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.449365   29086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0807 17:39:54.449413   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0807 17:39:54.492086   29086 cri.go:89] found id: "a22be6013db36e876083d12d7c640f4a7d50da13682e0523d84ce825d700a338"
	I0807 17:39:54.492108   29086 cri.go:89] found id: ""
	I0807 17:39:54.492116   29086 logs.go:276] 1 containers: [a22be6013db36e876083d12d7c640f4a7d50da13682e0523d84ce825d700a338]
	I0807 17:39:54.492171   29086 ssh_runner.go:195] Run: which crictl
	I0807 17:39:54.496562   29086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0807 17:39:54.496619   29086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0807 17:39:54.551052   29086 cri.go:89] found id: ""
	I0807 17:39:54.551077   29086 logs.go:276] 0 containers: []
	W0807 17:39:54.551087   29086 logs.go:278] No container was found matching "kindnet"
	I0807 17:39:54.551095   29086 logs.go:123] Gathering logs for kubelet ...
	I0807 17:39:54.551106   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0807 17:39:54.602060   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: W0807 17:38:03.279525    1269 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.602292   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: E0807 17:38:03.279651    1269 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.602438   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: W0807 17:38:03.279759    1269 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.602596   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: E0807 17:38:03.279786    1269 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.603874   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: W0807 17:38:03.614477    1269 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.604030   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:03 addons-533488 kubelet[1269]: E0807 17:38:03.614531    1269 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.608659   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:05 addons-533488 kubelet[1269]: W0807 17:38:05.911019    1269 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-533488" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.608857   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:05 addons-533488 kubelet[1269]: E0807 17:38:05.911069    1269 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-533488" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.609055   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:05 addons-533488 kubelet[1269]: W0807 17:38:05.914496    1269 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.609216   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:05 addons-533488 kubelet[1269]: E0807 17:38:05.914538    1269 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-533488" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.616410   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:07 addons-533488 kubelet[1269]: W0807 17:38:07.804238    1269 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-533488" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-533488' and this object
	W0807 17:39:54.616565   29086 logs.go:138] Found kubelet problem: Aug 07 17:38:07 addons-533488 kubelet[1269]: E0807 17:38:07.804416    1269 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-533488" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-533488' and this object
	I0807 17:39:54.640492   29086 logs.go:123] Gathering logs for etcd [5149620cc13749dc34198a033be89b0463f819aaa175b650f81e0d8655818231] ...
	I0807 17:39:54.640523   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5149620cc13749dc34198a033be89b0463f819aaa175b650f81e0d8655818231"
	I0807 17:39:54.705422   29086 logs.go:123] Gathering logs for kube-scheduler [6e861f2e5e27fb937da023ac22e2772323e0f6d7577a3f615d215870f66b6e4f] ...
	I0807 17:39:54.705453   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e861f2e5e27fb937da023ac22e2772323e0f6d7577a3f615d215870f66b6e4f"
	I0807 17:39:54.756288   29086 logs.go:123] Gathering logs for kube-proxy [f64277264b4abde694037ac51dd86a01f874641022121a58330e7f7061e614e2] ...
	I0807 17:39:54.756321   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f64277264b4abde694037ac51dd86a01f874641022121a58330e7f7061e614e2"
	I0807 17:39:54.794271   29086 logs.go:123] Gathering logs for CRI-O ...
	I0807 17:39:54.794299   29086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-533488 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.07s)
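Note on the repeated kapi.go:96 entries above: they are the addon wait loop polling each addon's pods by label selector (for example app.kubernetes.io/name=ingress-nginx) until they leave Pending, which is why the same selector recurs roughly twice per second until the "duration metric: took ..." lines appear. Below is a minimal illustrative sketch of that kind of wait loop written with client-go; it is not minikube's actual kapi.go code, and the namespace, selector, kubeconfig path, 500ms cadence and timeout are assumptions made only for the example.

// Illustrative sketch only (not minikube's kapi.go): poll for pods matching a
// label selector until they all report Running, similar in spirit to the
// "waiting for pod ... current state: Pending" lines in the log above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, ns)
}

func main() {
	// Assumed kubeconfig location; minikube's own tests wire this differently.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the log shows for the ingress addon.
	if err := waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
		panic(err)
	}
}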

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.884076994s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 image ls: (2.341166042s)
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-965692" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.23s)
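The assertion at functional_test.go:442 boils down to checking whether the loaded image name appears in the profile's "image ls" output. A minimal illustrative sketch of that kind of check follows; it is not the actual test code, and the binary path, profile name, and image reference are copied from the log lines above purely for the example.

// Illustrative sketch only (not functional_test.go): run "minikube image ls"
// for a profile and report whether an expected image reference is present,
// mirroring the assertion that failed above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageLoaded(minikube, profile, image string) (bool, error) {
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image ls failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imageLoaded("out/minikube-linux-amd64", "functional-965692", "docker.io/kicbase/echo-server:functional-965692")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("expected image not present in 'image ls' output")
	}
}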

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 node stop m02 -v=7 --alsologtostderr
E0807 18:34:14.922136   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.47707222s)

                                                
                                                
-- stdout --
	* Stopping node "ha-198246-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:33:41.488127   48662 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:33:41.488293   48662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:33:41.488302   48662 out.go:304] Setting ErrFile to fd 2...
	I0807 18:33:41.488307   48662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:33:41.488475   48662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:33:41.488727   48662 mustload.go:65] Loading cluster: ha-198246
	I0807 18:33:41.489068   48662 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:33:41.489089   48662 stop.go:39] StopHost: ha-198246-m02
	I0807 18:33:41.489465   48662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:33:41.489507   48662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:33:41.506259   48662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I0807 18:33:41.506693   48662 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:33:41.507343   48662 main.go:141] libmachine: Using API Version  1
	I0807 18:33:41.507369   48662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:33:41.507709   48662 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:33:41.510381   48662 out.go:177] * Stopping node "ha-198246-m02"  ...
	I0807 18:33:41.511950   48662 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0807 18:33:41.511996   48662 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:33:41.512275   48662 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0807 18:33:41.512315   48662 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:33:41.515803   48662 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:33:41.516352   48662 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:33:41.516374   48662 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:33:41.516614   48662 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:33:41.516808   48662 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:33:41.516956   48662 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:33:41.517104   48662 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:33:41.600398   48662 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0807 18:33:41.655537   48662 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0807 18:33:41.711929   48662 main.go:141] libmachine: Stopping "ha-198246-m02"...
	I0807 18:33:41.711951   48662 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:33:41.713370   48662 main.go:141] libmachine: (ha-198246-m02) Calling .Stop
	I0807 18:33:41.716804   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 0/120
	I0807 18:33:42.718860   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 1/120
	I0807 18:33:43.720141   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 2/120
	I0807 18:33:44.721341   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 3/120
	I0807 18:33:45.722530   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 4/120
	I0807 18:33:46.724369   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 5/120
	I0807 18:33:47.726701   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 6/120
	I0807 18:33:48.728065   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 7/120
	I0807 18:33:49.729522   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 8/120
	I0807 18:33:50.731093   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 9/120
	I0807 18:33:51.733307   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 10/120
	I0807 18:33:52.734793   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 11/120
	I0807 18:33:53.736162   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 12/120
	I0807 18:33:54.737355   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 13/120
	I0807 18:33:55.738664   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 14/120
	I0807 18:33:56.740530   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 15/120
	I0807 18:33:57.741776   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 16/120
	I0807 18:33:58.743980   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 17/120
	I0807 18:33:59.745231   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 18/120
	I0807 18:34:00.746730   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 19/120
	I0807 18:34:01.749024   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 20/120
	I0807 18:34:02.750954   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 21/120
	I0807 18:34:03.752427   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 22/120
	I0807 18:34:04.754591   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 23/120
	I0807 18:34:05.757009   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 24/120
	I0807 18:34:06.758453   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 25/120
	I0807 18:34:07.760588   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 26/120
	I0807 18:34:08.762726   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 27/120
	I0807 18:34:09.764494   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 28/120
	I0807 18:34:10.766793   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 29/120
	I0807 18:34:11.769026   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 30/120
	I0807 18:34:12.770797   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 31/120
	I0807 18:34:13.772327   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 32/120
	I0807 18:34:14.773640   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 33/120
	I0807 18:34:15.775486   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 34/120
	I0807 18:34:16.777546   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 35/120
	I0807 18:34:17.778884   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 36/120
	I0807 18:34:18.780426   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 37/120
	I0807 18:34:19.781872   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 38/120
	I0807 18:34:20.783273   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 39/120
	I0807 18:34:21.785298   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 40/120
	I0807 18:34:22.787358   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 41/120
	I0807 18:34:23.788847   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 42/120
	I0807 18:34:24.790669   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 43/120
	I0807 18:34:25.792742   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 44/120
	I0807 18:34:26.794712   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 45/120
	I0807 18:34:27.796166   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 46/120
	I0807 18:34:28.797613   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 47/120
	I0807 18:34:29.799899   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 48/120
	I0807 18:34:30.801246   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 49/120
	I0807 18:34:31.802989   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 50/120
	I0807 18:34:32.804423   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 51/120
	I0807 18:34:33.805801   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 52/120
	I0807 18:34:34.807297   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 53/120
	I0807 18:34:35.808742   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 54/120
	I0807 18:34:36.810617   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 55/120
	I0807 18:34:37.812319   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 56/120
	I0807 18:34:38.813823   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 57/120
	I0807 18:34:39.816347   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 58/120
	I0807 18:34:40.818491   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 59/120
	I0807 18:34:41.820935   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 60/120
	I0807 18:34:42.822395   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 61/120
	I0807 18:34:43.823758   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 62/120
	I0807 18:34:44.825232   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 63/120
	I0807 18:34:45.826718   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 64/120
	I0807 18:34:46.828528   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 65/120
	I0807 18:34:47.830645   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 66/120
	I0807 18:34:48.832455   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 67/120
	I0807 18:34:49.834543   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 68/120
	I0807 18:34:50.835872   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 69/120
	I0807 18:34:51.837670   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 70/120
	I0807 18:34:52.839365   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 71/120
	I0807 18:34:53.840720   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 72/120
	I0807 18:34:54.842939   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 73/120
	I0807 18:34:55.844261   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 74/120
	I0807 18:34:56.846279   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 75/120
	I0807 18:34:57.847512   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 76/120
	I0807 18:34:58.848853   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 77/120
	I0807 18:34:59.850257   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 78/120
	I0807 18:35:00.851914   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 79/120
	I0807 18:35:01.854197   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 80/120
	I0807 18:35:02.855821   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 81/120
	I0807 18:35:03.857790   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 82/120
	I0807 18:35:04.859616   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 83/120
	I0807 18:35:05.861899   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 84/120
	I0807 18:35:06.863589   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 85/120
	I0807 18:35:07.864939   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 86/120
	I0807 18:35:08.866587   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 87/120
	I0807 18:35:09.868049   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 88/120
	I0807 18:35:10.869627   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 89/120
	I0807 18:35:11.871552   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 90/120
	I0807 18:35:12.873807   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 91/120
	I0807 18:35:13.875172   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 92/120
	I0807 18:35:14.876908   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 93/120
	I0807 18:35:15.878266   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 94/120
	I0807 18:35:16.880487   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 95/120
	I0807 18:35:17.881870   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 96/120
	I0807 18:35:18.884009   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 97/120
	I0807 18:35:19.885328   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 98/120
	I0807 18:35:20.886781   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 99/120
	I0807 18:35:21.888894   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 100/120
	I0807 18:35:22.890259   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 101/120
	I0807 18:35:23.891712   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 102/120
	I0807 18:35:24.893210   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 103/120
	I0807 18:35:25.894468   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 104/120
	I0807 18:35:26.896567   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 105/120
	I0807 18:35:27.897851   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 106/120
	I0807 18:35:28.899369   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 107/120
	I0807 18:35:29.900808   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 108/120
	I0807 18:35:30.902036   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 109/120
	I0807 18:35:31.904457   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 110/120
	I0807 18:35:32.905956   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 111/120
	I0807 18:35:33.907230   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 112/120
	I0807 18:35:34.908859   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 113/120
	I0807 18:35:35.910248   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 114/120
	I0807 18:35:36.912560   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 115/120
	I0807 18:35:37.914757   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 116/120
	I0807 18:35:38.916151   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 117/120
	I0807 18:35:39.917545   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 118/120
	I0807 18:35:40.919430   48662 main.go:141] libmachine: (ha-198246-m02) Waiting for machine to stop 119/120
	I0807 18:35:41.920880   48662 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0807 18:35:41.921041   48662 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-198246 node stop m02 -v=7 --alsologtostderr": exit status 30
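The stop failure above is a bounded poll: libmachine checks the VM state roughly once per second and gives up after 120 attempts, which is what produces the "Temporary Error: stop: unable to stop vm" message and the exit status 30. A minimal Go sketch of that retry pattern, illustrative only and not minikube's actual implementation (waitForStop and getState are hypothetical names):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per interval, up to maxAttempts times,
// and fails if the machine never leaves the "Running" state.
func waitForStop(getState func() string, maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never stops, mirroring the failure in the log above.
	err := waitForStop(func() string { return "Running" }, 5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}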
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (19.163089481s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:35:41.962973   49096 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:35:41.963131   49096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:35:41.963212   49096 out.go:304] Setting ErrFile to fd 2...
	I0807 18:35:41.963228   49096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:35:41.963516   49096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:35:41.963749   49096 out.go:298] Setting JSON to false
	I0807 18:35:41.963777   49096 mustload.go:65] Loading cluster: ha-198246
	I0807 18:35:41.963871   49096 notify.go:220] Checking for updates...
	I0807 18:35:41.964341   49096 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:35:41.964362   49096 status.go:255] checking status of ha-198246 ...
	I0807 18:35:41.964920   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:41.965015   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:41.983762   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34677
	I0807 18:35:41.984245   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:41.984821   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:41.984852   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:41.985202   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:41.985395   49096 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:35:41.986984   49096 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:35:41.987001   49096 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:35:41.987305   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:41.987349   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:42.001674   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0807 18:35:42.002067   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:42.002610   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:42.002645   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:42.002970   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:42.003168   49096 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:35:42.006065   49096 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:35:42.006568   49096 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:35:42.006600   49096 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:35:42.006790   49096 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:35:42.007188   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:42.007235   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:42.022832   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38517
	I0807 18:35:42.023221   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:42.023758   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:42.023782   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:42.024148   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:42.024352   49096 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:35:42.024568   49096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:35:42.024599   49096 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:35:42.027646   49096 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:35:42.028062   49096 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:35:42.028106   49096 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:35:42.028301   49096 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:35:42.028464   49096 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:35:42.028604   49096 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:35:42.028700   49096 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:35:42.113693   49096 ssh_runner.go:195] Run: systemctl --version
	I0807 18:35:42.121037   49096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:35:42.141105   49096 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:35:42.141134   49096 api_server.go:166] Checking apiserver status ...
	I0807 18:35:42.141169   49096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:35:42.158172   49096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:35:42.168904   49096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:35:42.168970   49096 ssh_runner.go:195] Run: ls
	I0807 18:35:42.173940   49096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:35:42.180605   49096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:35:42.180631   49096 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:35:42.180641   49096 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:35:42.180656   49096 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:35:42.180942   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:42.180980   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:42.195950   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43675
	I0807 18:35:42.196348   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:42.196790   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:42.196812   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:42.197099   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:42.197290   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:35:42.198700   49096 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:35:42.198714   49096 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:35:42.199009   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:42.199040   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:42.213417   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I0807 18:35:42.213855   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:42.214327   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:42.214344   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:42.214662   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:42.214889   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:35:42.217607   49096 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:35:42.218030   49096 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:35:42.218056   49096 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:35:42.218177   49096 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:35:42.218596   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:35:42.218640   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:35:42.232822   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0807 18:35:42.233210   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:35:42.233646   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:35:42.233670   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:35:42.234032   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:35:42.234236   49096 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:35:42.234450   49096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:35:42.234472   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:35:42.237381   49096 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:35:42.237875   49096 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:35:42.237898   49096 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:35:42.238012   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:35:42.238186   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:35:42.238331   49096 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:35:42.238442   49096 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:00.708469   49096 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:00.708577   49096 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:00.708597   49096 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:00.708608   49096 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:00.708657   49096 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:00.708669   49096 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:00.708995   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.709079   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.724115   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33069
	I0807 18:36:00.724547   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.725047   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.725068   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.725379   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.725560   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:00.727288   49096 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:00.727302   49096 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:00.727595   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.727636   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.743542   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0807 18:36:00.744066   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.744640   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.744660   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.745097   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.745348   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:00.748850   49096 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:00.749268   49096 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:00.749289   49096 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:00.749473   49096 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:00.749864   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.749938   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.766508   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40075
	I0807 18:36:00.766895   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.767370   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.767395   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.767687   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.767934   49096 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:00.768128   49096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:00.768157   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:00.770885   49096 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:00.771261   49096 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:00.771297   49096 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:00.771437   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:00.771589   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:00.771776   49096 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:00.772010   49096 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:00.862315   49096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:00.880148   49096 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:00.880174   49096 api_server.go:166] Checking apiserver status ...
	I0807 18:36:00.880221   49096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:00.895258   49096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:00.905460   49096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:00.905519   49096 ssh_runner.go:195] Run: ls
	I0807 18:36:00.910411   49096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:00.915518   49096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:00.915541   49096 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:00.915548   49096 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:00.915564   49096 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:00.915900   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.915934   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.930775   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0807 18:36:00.931302   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.931715   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.931735   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.932262   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.932485   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:00.934070   49096 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:00.934086   49096 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:00.934481   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.934527   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.949391   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0807 18:36:00.949809   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.950269   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.950294   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.950604   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.950784   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:00.953712   49096 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:00.954168   49096 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:00.954204   49096 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:00.954300   49096 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:00.954594   49096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:00.954642   49096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:00.971056   49096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41817
	I0807 18:36:00.971504   49096 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:00.971970   49096 main.go:141] libmachine: Using API Version  1
	I0807 18:36:00.971994   49096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:00.972342   49096 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:00.972557   49096 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:00.972746   49096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:00.972765   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:00.975672   49096 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:00.976379   49096 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:00.976446   49096 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:00.976574   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:00.976745   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:00.976889   49096 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:00.977020   49096 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:01.066082   49096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:01.083561   49096 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr" : exit status 3
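The status failure reduces to an SSH reachability check: the run above spends about 19 seconds trying to dial 192.168.39.251:22, gets "no route to host", and therefore reports ha-198246-m02 as host Error / kubelet Nonexistent before exiting with status 3. A minimal sketch of such a TCP reachability probe, illustrative only and not minikube's actual implementation (probeSSH is a hypothetical helper; the node IPs are taken from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH reports whether a TCP connection to host:22 can be opened
// within the timeout; a dial error like "no route to host" is what the
// status check surfaced for ha-198246-m02 above.
func probeSSH(host string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Check each node's SSH port; unreachable nodes print the dial error.
	for _, ip := range []string{"192.168.39.196", "192.168.39.251"} {
		if err := probeSSH(ip, 2*time.Second); err != nil {
			fmt.Printf("%s: unreachable: %v\n", ip, err)
			continue
		}
		fmt.Printf("%s: ssh port reachable\n", ip)
	}
}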
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198246 -n ha-198246
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-198246 logs -n 25: (1.53216351s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m03_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m04 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp testdata/cp-test.txt                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m04_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03:/home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m03 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-198246 node stop m02 -v=7                                                     | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:27:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:27:21.721727   44266 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:27:21.721967   44266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:21.721975   44266 out.go:304] Setting ErrFile to fd 2...
	I0807 18:27:21.721979   44266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:21.722152   44266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:27:21.722687   44266 out.go:298] Setting JSON to false
	I0807 18:27:21.723512   44266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7788,"bootTime":1723047454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:27:21.723565   44266 start.go:139] virtualization: kvm guest
	I0807 18:27:21.725729   44266 out.go:177] * [ha-198246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 18:27:21.727183   44266 notify.go:220] Checking for updates...
	I0807 18:27:21.727193   44266 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:27:21.728548   44266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:27:21.729974   44266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:27:21.731326   44266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:21.732576   44266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:27:21.733798   44266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:27:21.735342   44266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:27:21.769737   44266 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 18:27:21.771127   44266 start.go:297] selected driver: kvm2
	I0807 18:27:21.771144   44266 start.go:901] validating driver "kvm2" against <nil>
	I0807 18:27:21.771156   44266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:27:21.771870   44266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:21.771942   44266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 18:27:21.786733   44266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 18:27:21.786777   44266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:27:21.786970   44266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:27:21.787023   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:27:21.787034   44266 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 18:27:21.787041   44266 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:27:21.787097   44266 start.go:340] cluster config:
	{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:27:21.787200   44266 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:21.789527   44266 out.go:177] * Starting "ha-198246" primary control-plane node in "ha-198246" cluster
	I0807 18:27:21.790581   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:27:21.790607   44266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 18:27:21.790615   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:27:21.790695   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:27:21.790708   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:27:21.790995   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:27:21.791015   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json: {Name:mk9ea4fdb45a0ad19fddd77d9e86e860b1888943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:21.791157   44266 start.go:360] acquireMachinesLock for ha-198246: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:27:21.791197   44266 start.go:364] duration metric: took 17.005µs to acquireMachinesLock for "ha-198246"
	I0807 18:27:21.791219   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:27:21.791271   44266 start.go:125] createHost starting for "" (driver="kvm2")
	I0807 18:27:21.792742   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:27:21.792862   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:27:21.792923   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:27:21.806899   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0807 18:27:21.807336   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:27:21.807888   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:27:21.807907   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:27:21.808260   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:27:21.808450   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:21.808588   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:21.808718   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:27:21.808749   44266 client.go:168] LocalClient.Create starting
	I0807 18:27:21.808783   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:27:21.808815   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:27:21.808831   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:27:21.808893   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:27:21.808911   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:27:21.808924   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:27:21.808938   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:27:21.808951   44266 main.go:141] libmachine: (ha-198246) Calling .PreCreateCheck
	I0807 18:27:21.809303   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:21.809632   44266 main.go:141] libmachine: Creating machine...
	I0807 18:27:21.809644   44266 main.go:141] libmachine: (ha-198246) Calling .Create
	I0807 18:27:21.809775   44266 main.go:141] libmachine: (ha-198246) Creating KVM machine...
	I0807 18:27:21.810961   44266 main.go:141] libmachine: (ha-198246) DBG | found existing default KVM network
	I0807 18:27:21.811595   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:21.811462   44289 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0807 18:27:21.811613   44266 main.go:141] libmachine: (ha-198246) DBG | created network xml: 
	I0807 18:27:21.811622   44266 main.go:141] libmachine: (ha-198246) DBG | <network>
	I0807 18:27:21.811630   44266 main.go:141] libmachine: (ha-198246) DBG |   <name>mk-ha-198246</name>
	I0807 18:27:21.811643   44266 main.go:141] libmachine: (ha-198246) DBG |   <dns enable='no'/>
	I0807 18:27:21.811649   44266 main.go:141] libmachine: (ha-198246) DBG |   
	I0807 18:27:21.811659   44266 main.go:141] libmachine: (ha-198246) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0807 18:27:21.811669   44266 main.go:141] libmachine: (ha-198246) DBG |     <dhcp>
	I0807 18:27:21.811682   44266 main.go:141] libmachine: (ha-198246) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0807 18:27:21.811690   44266 main.go:141] libmachine: (ha-198246) DBG |     </dhcp>
	I0807 18:27:21.811695   44266 main.go:141] libmachine: (ha-198246) DBG |   </ip>
	I0807 18:27:21.811700   44266 main.go:141] libmachine: (ha-198246) DBG |   
	I0807 18:27:21.811723   44266 main.go:141] libmachine: (ha-198246) DBG | </network>
	I0807 18:27:21.811744   44266 main.go:141] libmachine: (ha-198246) DBG | 
	I0807 18:27:21.816727   44266 main.go:141] libmachine: (ha-198246) DBG | trying to create private KVM network mk-ha-198246 192.168.39.0/24...
	I0807 18:27:21.878767   44266 main.go:141] libmachine: (ha-198246) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 ...
	I0807 18:27:21.878803   44266 main.go:141] libmachine: (ha-198246) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:27:21.878813   44266 main.go:141] libmachine: (ha-198246) DBG | private KVM network mk-ha-198246 192.168.39.0/24 created
	I0807 18:27:21.878832   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:21.878720   44289 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:21.878874   44266 main.go:141] libmachine: (ha-198246) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:27:22.116138   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.116028   44289 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa...
	I0807 18:27:22.201603   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.201499   44289 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/ha-198246.rawdisk...
	I0807 18:27:22.201635   44266 main.go:141] libmachine: (ha-198246) DBG | Writing magic tar header
	I0807 18:27:22.201649   44266 main.go:141] libmachine: (ha-198246) DBG | Writing SSH key tar header
	I0807 18:27:22.201665   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.201611   44289 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 ...
	I0807 18:27:22.201729   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246
	I0807 18:27:22.201753   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 (perms=drwx------)
	I0807 18:27:22.201760   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:27:22.201769   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:22.201775   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:27:22.201784   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:27:22.201812   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:27:22.201832   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home
	I0807 18:27:22.201841   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:27:22.201848   44266 main.go:141] libmachine: (ha-198246) DBG | Skipping /home - not owner
	I0807 18:27:22.201885   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:27:22.201909   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:27:22.201940   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:27:22.201958   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:27:22.201972   44266 main.go:141] libmachine: (ha-198246) Creating domain...
	I0807 18:27:22.202738   44266 main.go:141] libmachine: (ha-198246) define libvirt domain using xml: 
	I0807 18:27:22.202751   44266 main.go:141] libmachine: (ha-198246) <domain type='kvm'>
	I0807 18:27:22.202774   44266 main.go:141] libmachine: (ha-198246)   <name>ha-198246</name>
	I0807 18:27:22.202794   44266 main.go:141] libmachine: (ha-198246)   <memory unit='MiB'>2200</memory>
	I0807 18:27:22.202803   44266 main.go:141] libmachine: (ha-198246)   <vcpu>2</vcpu>
	I0807 18:27:22.202808   44266 main.go:141] libmachine: (ha-198246)   <features>
	I0807 18:27:22.202813   44266 main.go:141] libmachine: (ha-198246)     <acpi/>
	I0807 18:27:22.202817   44266 main.go:141] libmachine: (ha-198246)     <apic/>
	I0807 18:27:22.202822   44266 main.go:141] libmachine: (ha-198246)     <pae/>
	I0807 18:27:22.202827   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.202831   44266 main.go:141] libmachine: (ha-198246)   </features>
	I0807 18:27:22.202835   44266 main.go:141] libmachine: (ha-198246)   <cpu mode='host-passthrough'>
	I0807 18:27:22.202840   44266 main.go:141] libmachine: (ha-198246)   
	I0807 18:27:22.202844   44266 main.go:141] libmachine: (ha-198246)   </cpu>
	I0807 18:27:22.202848   44266 main.go:141] libmachine: (ha-198246)   <os>
	I0807 18:27:22.202852   44266 main.go:141] libmachine: (ha-198246)     <type>hvm</type>
	I0807 18:27:22.202857   44266 main.go:141] libmachine: (ha-198246)     <boot dev='cdrom'/>
	I0807 18:27:22.202864   44266 main.go:141] libmachine: (ha-198246)     <boot dev='hd'/>
	I0807 18:27:22.202878   44266 main.go:141] libmachine: (ha-198246)     <bootmenu enable='no'/>
	I0807 18:27:22.202882   44266 main.go:141] libmachine: (ha-198246)   </os>
	I0807 18:27:22.202898   44266 main.go:141] libmachine: (ha-198246)   <devices>
	I0807 18:27:22.202906   44266 main.go:141] libmachine: (ha-198246)     <disk type='file' device='cdrom'>
	I0807 18:27:22.202934   44266 main.go:141] libmachine: (ha-198246)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/boot2docker.iso'/>
	I0807 18:27:22.202951   44266 main.go:141] libmachine: (ha-198246)       <target dev='hdc' bus='scsi'/>
	I0807 18:27:22.202961   44266 main.go:141] libmachine: (ha-198246)       <readonly/>
	I0807 18:27:22.202972   44266 main.go:141] libmachine: (ha-198246)     </disk>
	I0807 18:27:22.202982   44266 main.go:141] libmachine: (ha-198246)     <disk type='file' device='disk'>
	I0807 18:27:22.202994   44266 main.go:141] libmachine: (ha-198246)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:27:22.203006   44266 main.go:141] libmachine: (ha-198246)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/ha-198246.rawdisk'/>
	I0807 18:27:22.203017   44266 main.go:141] libmachine: (ha-198246)       <target dev='hda' bus='virtio'/>
	I0807 18:27:22.203025   44266 main.go:141] libmachine: (ha-198246)     </disk>
	I0807 18:27:22.203039   44266 main.go:141] libmachine: (ha-198246)     <interface type='network'>
	I0807 18:27:22.203047   44266 main.go:141] libmachine: (ha-198246)       <source network='mk-ha-198246'/>
	I0807 18:27:22.203055   44266 main.go:141] libmachine: (ha-198246)       <model type='virtio'/>
	I0807 18:27:22.203066   44266 main.go:141] libmachine: (ha-198246)     </interface>
	I0807 18:27:22.203074   44266 main.go:141] libmachine: (ha-198246)     <interface type='network'>
	I0807 18:27:22.203086   44266 main.go:141] libmachine: (ha-198246)       <source network='default'/>
	I0807 18:27:22.203094   44266 main.go:141] libmachine: (ha-198246)       <model type='virtio'/>
	I0807 18:27:22.203103   44266 main.go:141] libmachine: (ha-198246)     </interface>
	I0807 18:27:22.203110   44266 main.go:141] libmachine: (ha-198246)     <serial type='pty'>
	I0807 18:27:22.203121   44266 main.go:141] libmachine: (ha-198246)       <target port='0'/>
	I0807 18:27:22.203127   44266 main.go:141] libmachine: (ha-198246)     </serial>
	I0807 18:27:22.203154   44266 main.go:141] libmachine: (ha-198246)     <console type='pty'>
	I0807 18:27:22.203178   44266 main.go:141] libmachine: (ha-198246)       <target type='serial' port='0'/>
	I0807 18:27:22.203188   44266 main.go:141] libmachine: (ha-198246)     </console>
	I0807 18:27:22.203200   44266 main.go:141] libmachine: (ha-198246)     <rng model='virtio'>
	I0807 18:27:22.203214   44266 main.go:141] libmachine: (ha-198246)       <backend model='random'>/dev/random</backend>
	I0807 18:27:22.203229   44266 main.go:141] libmachine: (ha-198246)     </rng>
	I0807 18:27:22.203239   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.203243   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.203251   44266 main.go:141] libmachine: (ha-198246)   </devices>
	I0807 18:27:22.203258   44266 main.go:141] libmachine: (ha-198246) </domain>
	I0807 18:27:22.203271   44266 main.go:141] libmachine: (ha-198246) 
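The lines above show the kvm2 driver defining a libvirt domain from generated XML and then booting it. A minimal sketch of that two-step flow with the libvirt Go bindings (assuming libvirt.org/go/libvirt; the function and variable names are illustrative, not the driver's actual code):

    package sketch

    import (
    	"libvirt.org/go/libvirt"
    )

    // defineAndStart defines a persistent domain from the XML shown in the log
    // above and then boots it ("Creating domain...").
    func defineAndStart(domainXML string) error {
    	// Same URI the kvm2 driver uses for the system libvirt daemon (KVMQemuURI).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create()
    }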
	I0807 18:27:22.207680   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:90:2f:e2 in network default
	I0807 18:27:22.208187   44266 main.go:141] libmachine: (ha-198246) Ensuring networks are active...
	I0807 18:27:22.208224   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:22.208779   44266 main.go:141] libmachine: (ha-198246) Ensuring network default is active
	I0807 18:27:22.209008   44266 main.go:141] libmachine: (ha-198246) Ensuring network mk-ha-198246 is active
	I0807 18:27:22.209409   44266 main.go:141] libmachine: (ha-198246) Getting domain xml...
	I0807 18:27:22.209962   44266 main.go:141] libmachine: (ha-198246) Creating domain...
	I0807 18:27:23.404405   44266 main.go:141] libmachine: (ha-198246) Waiting to get IP...
	I0807 18:27:23.405206   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:23.405600   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:23.405641   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:23.405567   44289 retry.go:31] will retry after 306.958712ms: waiting for machine to come up
	I0807 18:27:23.713982   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:23.714499   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:23.714526   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:23.714462   44289 retry.go:31] will retry after 299.119708ms: waiting for machine to come up
	I0807 18:27:24.014947   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.015426   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.015446   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.015393   44289 retry.go:31] will retry after 384.564278ms: waiting for machine to come up
	I0807 18:27:24.402079   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.402483   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.402507   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.402453   44289 retry.go:31] will retry after 547.85343ms: waiting for machine to come up
	I0807 18:27:24.952336   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.952783   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.952809   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.952724   44289 retry.go:31] will retry after 591.886125ms: waiting for machine to come up
	I0807 18:27:25.546536   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:25.546960   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:25.546987   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:25.546919   44289 retry.go:31] will retry after 637.639818ms: waiting for machine to come up
	I0807 18:27:26.185754   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:26.186206   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:26.186253   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:26.186171   44289 retry.go:31] will retry after 1.07415852s: waiting for machine to come up
	I0807 18:27:27.261894   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:27.262328   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:27.262357   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:27.262273   44289 retry.go:31] will retry after 1.388616006s: waiting for machine to come up
	I0807 18:27:28.652877   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:28.653287   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:28.653318   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:28.653222   44289 retry.go:31] will retry after 1.163215795s: waiting for machine to come up
	I0807 18:27:29.818449   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:29.818914   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:29.818948   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:29.818858   44289 retry.go:31] will retry after 2.029996828s: waiting for machine to come up
	I0807 18:27:31.849800   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:31.850166   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:31.850195   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:31.850108   44289 retry.go:31] will retry after 1.806326332s: waiting for machine to come up
	I0807 18:27:33.659132   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:33.659739   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:33.659768   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:33.659685   44289 retry.go:31] will retry after 3.239044606s: waiting for machine to come up
	I0807 18:27:36.900422   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:36.900792   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:36.900819   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:36.900742   44289 retry.go:31] will retry after 3.037723315s: waiting for machine to come up
	I0807 18:27:39.941930   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:39.942412   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:39.942441   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:39.942337   44289 retry.go:31] will retry after 5.1268659s: waiting for machine to come up
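The repeated "will retry after ..." messages come from a polling loop that waits with growing, jittered delays until the DHCP lease shows an IP for the VM's MAC address. A rough sketch of that pattern (assumed shape, not the actual retry.go implementation):

    package sketch

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns a non-empty IP or the timeout
    // elapses; delays grow with a little jitter, matching the cadence above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }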
	I0807 18:27:45.074427   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.074880   44266 main.go:141] libmachine: (ha-198246) Found IP for machine: 192.168.39.196
	I0807 18:27:45.074901   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has current primary IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.074909   44266 main.go:141] libmachine: (ha-198246) Reserving static IP address...
	I0807 18:27:45.075244   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find host DHCP lease matching {name: "ha-198246", mac: "52:54:00:b0:88:98", ip: "192.168.39.196"} in network mk-ha-198246
	I0807 18:27:45.145259   44266 main.go:141] libmachine: (ha-198246) DBG | Getting to WaitForSSH function...
	I0807 18:27:45.145285   44266 main.go:141] libmachine: (ha-198246) Reserved static IP address: 192.168.39.196
	I0807 18:27:45.145343   44266 main.go:141] libmachine: (ha-198246) Waiting for SSH to be available...
	I0807 18:27:45.147843   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.148233   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246
	I0807 18:27:45.148255   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find defined IP address of network mk-ha-198246 interface with MAC address 52:54:00:b0:88:98
	I0807 18:27:45.148474   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH client type: external
	I0807 18:27:45.148506   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa (-rw-------)
	I0807 18:27:45.148558   44266 main.go:141] libmachine: (ha-198246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:27:45.148578   44266 main.go:141] libmachine: (ha-198246) DBG | About to run SSH command:
	I0807 18:27:45.148592   44266 main.go:141] libmachine: (ha-198246) DBG | exit 0
	I0807 18:27:45.152274   44266 main.go:141] libmachine: (ha-198246) DBG | SSH cmd err, output: exit status 255: 
	I0807 18:27:45.152292   44266 main.go:141] libmachine: (ha-198246) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0807 18:27:45.152299   44266 main.go:141] libmachine: (ha-198246) DBG | command : exit 0
	I0807 18:27:45.152304   44266 main.go:141] libmachine: (ha-198246) DBG | err     : exit status 255
	I0807 18:27:45.152312   44266 main.go:141] libmachine: (ha-198246) DBG | output  : 
	I0807 18:27:48.153047   44266 main.go:141] libmachine: (ha-198246) DBG | Getting to WaitForSSH function...
	I0807 18:27:48.155522   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.155912   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.155936   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.156105   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH client type: external
	I0807 18:27:48.156130   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa (-rw-------)
	I0807 18:27:48.156167   44266 main.go:141] libmachine: (ha-198246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:27:48.156191   44266 main.go:141] libmachine: (ha-198246) DBG | About to run SSH command:
	I0807 18:27:48.156225   44266 main.go:141] libmachine: (ha-198246) DBG | exit 0
	I0807 18:27:48.280381   44266 main.go:141] libmachine: (ha-198246) DBG | SSH cmd err, output: <nil>: 
	I0807 18:27:48.280692   44266 main.go:141] libmachine: (ha-198246) KVM machine creation complete!
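WaitForSSH above simply keeps running "exit 0" over SSH until it returns status 0 (the first attempt fails with status 255 because the address is not yet reachable). A minimal sketch of that probe using the external ssh client, as in the log (a subset of the options shown above, for illustration only):

    package sketch

    import "os/exec"

    // sshReady reports whether "exit 0" succeeds over SSH with the machine's
    // private key, the same readiness probe WaitForSSH runs in the log above.
    func sshReady(ip, keyPath string) bool {
    	args := []string{
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-i", keyPath,
    		"docker@" + ip,
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }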
	I0807 18:27:48.281058   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:48.281656   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:48.281875   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:48.282036   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:27:48.282050   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:27:48.283345   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:27:48.283363   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:27:48.283372   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:27:48.283379   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.286023   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.286450   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.286469   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.286618   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.286773   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.286910   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.287021   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.287206   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.287379   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.287389   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:27:48.387621   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:27:48.387645   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:27:48.387655   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.390612   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.391010   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.391041   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.391226   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.391498   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.391674   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.391801   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.392003   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.392181   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.392195   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:27:48.492889   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:27:48.492963   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:27:48.492969   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:27:48.492976   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.493236   44266 buildroot.go:166] provisioning hostname "ha-198246"
	I0807 18:27:48.493263   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.493468   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.496265   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.496578   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.496602   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.496742   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.496924   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.497076   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.497274   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.497500   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.497677   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.497689   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246 && echo "ha-198246" | sudo tee /etc/hostname
	I0807 18:27:48.615801   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:27:48.615855   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.618925   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.619286   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.619315   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.619478   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.619662   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.619808   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.619965   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.620141   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.620341   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.620359   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:27:48.729682   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:27:48.729740   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:27:48.729770   44266 buildroot.go:174] setting up certificates
	I0807 18:27:48.729789   44266 provision.go:84] configureAuth start
	I0807 18:27:48.729808   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.730094   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:48.732947   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.733289   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.733317   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.733475   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.735604   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.735911   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.735935   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.736091   44266 provision.go:143] copyHostCerts
	I0807 18:27:48.736118   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:27:48.736160   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:27:48.736174   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:27:48.736261   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:27:48.736361   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:27:48.736380   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:27:48.736386   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:27:48.736428   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:27:48.736530   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:27:48.736553   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:27:48.736560   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:27:48.736583   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:27:48.736657   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246 san=[127.0.0.1 192.168.39.196 ha-198246 localhost minikube]
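The server certificate generated here is signed by the minikube CA and lists the VM IP, hostname, localhost and loopback as SANs. A minimal sketch of building such a cert with crypto/x509 (illustrative only, not minikube's provisioning code; key size, validity and subject layout are assumptions):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert creates an RSA key and a server certificate signed by the
    // given CA, with SANs matching the san=[...] list in the log above.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
    	key, err = rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-198246"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-198246", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.196")},
    	}
    	certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return certDER, key, err
    }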
	I0807 18:27:48.961157   44266 provision.go:177] copyRemoteCerts
	I0807 18:27:48.961215   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:27:48.961238   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.964265   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.964661   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.964697   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.964961   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.965206   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.965427   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.965581   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.047016   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:27:49.047096   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:27:49.071078   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:27:49.071152   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:27:49.095496   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:27:49.095566   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:27:49.120006   44266 provision.go:87] duration metric: took 390.201413ms to configureAuth
	I0807 18:27:49.120032   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:27:49.120250   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:27:49.120330   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.122781   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.123123   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.123148   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.123319   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.123504   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.123653   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.123754   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.123923   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:49.124077   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:49.124093   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:27:49.379427   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:27:49.379449   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:27:49.379457   44266 main.go:141] libmachine: (ha-198246) Calling .GetURL
	I0807 18:27:49.381160   44266 main.go:141] libmachine: (ha-198246) DBG | Using libvirt version 6000000
	I0807 18:27:49.383505   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.383829   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.383861   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.384032   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:27:49.384051   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:27:49.384060   44266 client.go:171] duration metric: took 27.57529956s to LocalClient.Create
	I0807 18:27:49.384091   44266 start.go:167] duration metric: took 27.575373855s to libmachine.API.Create "ha-198246"
	I0807 18:27:49.384103   44266 start.go:293] postStartSetup for "ha-198246" (driver="kvm2")
	I0807 18:27:49.384117   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:27:49.384137   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.384384   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:27:49.384406   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.387011   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.387377   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.387400   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.387601   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.387778   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.387917   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.388019   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.467416   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:27:49.471819   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:27:49.471844   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:27:49.471913   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:27:49.471996   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:27:49.472007   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:27:49.472100   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:27:49.482472   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:27:49.507293   44266 start.go:296] duration metric: took 123.178167ms for postStartSetup
	I0807 18:27:49.507345   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:49.507928   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:49.510575   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.511008   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.511039   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.511346   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:27:49.511529   44266 start.go:128] duration metric: took 27.720249653s to createHost
	I0807 18:27:49.511551   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.513835   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.514239   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.514268   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.514412   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.514597   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.514751   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.514864   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.515031   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:49.515233   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:49.515246   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:27:49.621198   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055269.601266320
	
	I0807 18:27:49.621228   44266 fix.go:216] guest clock: 1723055269.601266320
	I0807 18:27:49.621239   44266 fix.go:229] Guest: 2024-08-07 18:27:49.60126632 +0000 UTC Remote: 2024-08-07 18:27:49.511541014 +0000 UTC m=+27.822561678 (delta=89.725306ms)
	I0807 18:27:49.621348   44266 fix.go:200] guest clock delta is within tolerance: 89.725306ms
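The fix.go lines above parse the guest's "date +%s.%N" output and compare it against the host clock, accepting the start when the delta is within tolerance. A tiny sketch of that check (illustrative; the conversion and comparison shape are assumptions):

    package sketch

    import (
    	"math"
    	"time"
    )

    // clockWithinTolerance converts the guest's fractional Unix timestamp to a
    // time.Time and reports whether it is within tol of the host clock.
    func clockWithinTolerance(guestUnix float64, host time.Time, tol time.Duration) bool {
    	sec, frac := math.Modf(guestUnix)
    	guest := time.Unix(int64(sec), int64(frac*1e9))
    	d := host.Sub(guest)
    	if d < 0 {
    		d = -d
    	}
    	return d <= tol
    }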
	I0807 18:27:49.621358   44266 start.go:83] releasing machines lock for "ha-198246", held for 27.830148378s
	I0807 18:27:49.621384   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.621648   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:49.624076   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.624475   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.624506   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.624646   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625094   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625251   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625329   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:27:49.625368   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.625433   44266 ssh_runner.go:195] Run: cat /version.json
	I0807 18:27:49.625456   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.628179   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628428   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628489   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.628513   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628653   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.628845   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.628875   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.628902   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628989   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.629163   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.629174   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.629309   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.629489   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.629653   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.707121   44266 ssh_runner.go:195] Run: systemctl --version
	I0807 18:27:49.730199   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:27:49.894353   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:27:49.901460   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:27:49.901532   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:27:49.918470   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:27:49.918496   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:27:49.918550   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:27:49.935346   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:27:49.950325   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:27:49.950373   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:27:49.965026   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:27:49.979393   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:27:50.101391   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:27:50.266571   44266 docker.go:233] disabling docker service ...
	I0807 18:27:50.266633   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:27:50.280886   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:27:50.293687   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:27:50.411890   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:27:50.531647   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:27:50.545917   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:27:50.565503   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:27:50.565564   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.577648   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:27:50.577727   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.589717   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.601142   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.612276   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:27:50.623423   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.634380   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.652648   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
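Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (an illustrative reconstruction from the commands, not a dump of the actual file; section placement follows CRI-O's usual layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]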
	I0807 18:27:50.664994   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:27:50.675990   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:27:50.676071   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:27:50.690790   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:27:50.702376   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:27:50.836087   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:27:50.977071   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:27:50.977144   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:27:50.982362   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:27:50.982434   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:27:50.986273   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:27:51.023888   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:27:51.023993   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:27:51.051884   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:27:51.082665   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:27:51.083804   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:51.086499   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:51.086829   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:51.086855   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:51.087080   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:27:51.091372   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:27:51.104446   44266 kubeadm.go:883] updating cluster {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:27:51.104537   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:27:51.104583   44266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:27:51.135506   44266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0807 18:27:51.135568   44266 ssh_runner.go:195] Run: which lz4
	I0807 18:27:51.140129   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0807 18:27:51.140252   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0807 18:27:51.144801   44266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 18:27:51.144833   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0807 18:27:52.554895   44266 crio.go:462] duration metric: took 1.414692613s to copy over tarball
	I0807 18:27:52.555019   44266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 18:27:54.702005   44266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.146953106s)
	I0807 18:27:54.702032   44266 crio.go:469] duration metric: took 2.147109225s to extract the tarball
	I0807 18:27:54.702041   44266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 18:27:54.740000   44266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:27:54.786797   44266 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:27:54.786816   44266 cache_images.go:84] Images are preloaded, skipping loading
	I0807 18:27:54.786825   44266 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0807 18:27:54.786956   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:27:54.787033   44266 ssh_runner.go:195] Run: crio config
	I0807 18:27:54.830632   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:27:54.830659   44266 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:27:54.830671   44266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:27:54.830691   44266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198246 NodeName:ha-198246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:27:54.830808   44266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-198246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
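The rendered kubeadm.yaml bundles four documents: InitConfiguration (node registration and the advertise address), ClusterConfiguration (control-plane endpoint control-plane.minikube.internal:8443, extra cert SANs, admission plugins), KubeletConfiguration and KubeProxyConfiguration. The "0%!"(MISSING) fragments are again logger format artifacts; the file written to the node carries plain "0%" eviction thresholds. Once uploaded (it lands at /var/tmp/minikube/kubeadm.yaml a few lines below), the file can be sanity-checked with the bundled kubeadm; the config validate subcommand is assumed here to be available in the v1.30.x binary:

  # Inspect the file minikube uploads and let kubeadm validate it
  sudo cat /var/tmp/minikube/kubeadm.yaml
  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml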
	
	I0807 18:27:54.830828   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:27:54.830867   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:27:54.849054   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:27:54.849165   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
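This manifest is written as a static pod (the scp to /etc/kubernetes/manifests/kube-vip.yaml appears a few lines below), so the kubelet runs kube-vip on the control-plane node itself; the elected leader answers ARP for the HA virtual IP 192.168.39.254, and with lb_enable set it also load-balances API traffic across control-plane members on port 8443. Once the cluster is up, the VIP can be checked with ordinary tools, for example:

  # On the current kube-vip leader the VIP is bound to eth0
  ip addr show eth0 | grep 192.168.39.254
  # The API server should answer on the VIP (/healthz is readable without credentials by default)
  curl -k https://192.168.39.254:8443/healthz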
	I0807 18:27:54.849229   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:27:54.859040   44266 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:27:54.859110   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:27:54.868475   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:27:54.885744   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:27:54.902712   44266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0807 18:27:54.919755   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0807 18:27:54.936740   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:27:54.940938   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:27:54.953525   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:27:55.078749   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
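The preceding lines push the generated files from memory (the 10-kubeadm.conf drop-in carrying the node-specific kubelet flags shown earlier, kubelet.service, the kubeadm config and the kube-vip static-pod manifest), pin control-plane.minikube.internal to the VIP in /etc/hosts, and then reload systemd and start the kubelet. A condensed sketch of the same sequence, assuming the file contents shown in the log:

  sudo mkdir -p /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests
  # 10-kubeadm.conf contains the ExecStart override with --node-ip=192.168.39.196
  # Make the shared control-plane name resolve to the kube-vip address
  grep -q 'control-plane.minikube.internal' /etc/hosts \
    || echo '192.168.39.254 control-plane.minikube.internal' | sudo tee -a /etc/hosts
  sudo systemctl daemon-reload
  sudo systemctl start kubelet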
	I0807 18:27:55.097378   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.196
	I0807 18:27:55.097404   44266 certs.go:194] generating shared ca certs ...
	I0807 18:27:55.097422   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.097635   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:27:55.097699   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:27:55.097714   44266 certs.go:256] generating profile certs ...
	I0807 18:27:55.097787   44266 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:27:55.097814   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt with IP's: []
	I0807 18:27:55.208693   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt ...
	I0807 18:27:55.208724   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt: {Name:mka7fa8cfb74ff61110b7cfa5be9a6c01adb62d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.208915   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key ...
	I0807 18:27:55.208929   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key: {Name:mk2f8f0495ba491dab5e08ca790f78097bcc62bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.209031   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4
	I0807 18:27:55.209049   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.254]
	I0807 18:27:55.624285   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 ...
	I0807 18:27:55.624314   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4: {Name:mkd68d7f250c70cd5fa8d28ad5bc1bbe0c86a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.624461   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4 ...
	I0807 18:27:55.624473   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4: {Name:mkff3833d02b04ce9c36a734c937e13f709f80e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.624542   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:27:55.624619   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:27:55.624669   44266 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:27:55.624697   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt with IP's: []
	I0807 18:27:55.759073   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt ...
	I0807 18:27:55.759102   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt: {Name:mkb3499dae347a7cfa9dfc4b50cfa2f9ee673ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.759241   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key ...
	I0807 18:27:55.759251   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key: {Name:mk07d2a285004089dd73e71e881ed70e932c4b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
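Three profile certificates are minted under the shared minikubeCA: a client certificate for minikube-user, the apiserver serving certificate (whose SAN list includes both the node IP 192.168.39.196 and the HA VIP 192.168.39.254, so the same certificate stays valid when the API is reached through kube-vip), and the aggregator proxy-client certificate. The SANs can be confirmed with openssl:

  # Show the Subject Alternative Names baked into the apiserver cert
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt \
    | grep -A1 'Subject Alternative Name'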
	I0807 18:27:55.759316   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:27:55.759333   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:27:55.759346   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:27:55.759360   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:27:55.759372   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:27:55.759384   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:27:55.759400   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:27:55.759412   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:27:55.759461   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:27:55.759494   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:27:55.759504   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:27:55.759526   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:27:55.759550   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:27:55.759571   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:27:55.759608   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:27:55.759632   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:55.759646   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:27:55.759658   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:27:55.760218   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:27:55.787520   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:27:55.815044   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:27:55.840056   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:27:55.867094   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 18:27:55.897468   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:27:55.954340   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:27:55.987648   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:27:56.013314   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:27:56.039452   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:27:56.065754   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:27:56.091942   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:27:56.111135   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:27:56.117304   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:27:56.128689   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.133805   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.133860   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.140551   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:27:56.152253   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:27:56.164005   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.169124   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.169182   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.175462   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:27:56.187518   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:27:56.201710   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.206624   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.206673   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.212905   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
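The openssl/ln sequence above installs each CA bundle into /usr/share/ca-certificates and creates an OpenSSL hash-named symlink under /etc/ssl/certs, which is how OpenSSL locates trusted CAs at verification time. The pattern, slightly simplified, for one certificate:

  cert=/usr/share/ca-certificates/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. b5213941, as in the log
  sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"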
	I0807 18:27:56.224505   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:27:56.229013   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:27:56.229063   44266 kubeadm.go:392] StartCluster: {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:27:56.229135   44266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 18:27:56.229186   44266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 18:27:56.270696   44266 cri.go:89] found id: ""
	I0807 18:27:56.270773   44266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 18:27:56.281401   44266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 18:27:56.291997   44266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 18:27:56.302725   44266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 18:27:56.302745   44266 kubeadm.go:157] found existing configuration files:
	
	I0807 18:27:56.302792   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 18:27:56.312979   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 18:27:56.313046   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 18:27:56.323033   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 18:27:56.332468   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 18:27:56.332514   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 18:27:56.342420   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 18:27:56.351963   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 18:27:56.352032   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 18:27:56.361913   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 18:27:56.371591   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 18:27:56.371657   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
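On a fresh node none of the kubeadm-managed kubeconfigs exist, so every grep for the expected endpoint exits with status 2 and the (absent) file is removed anyway; the loop is effectively an idempotent "drop any kubeconfig that does not point at control-plane.minikube.internal:8443" pass before running init. Per file the pattern amounts to:

  f=/etc/kubernetes/admin.conf
  grep -q 'https://control-plane.minikube.internal:8443' "$f" 2>/dev/null \
    || sudo rm -f "$f"   # a no-op when the file is simply missing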
	I0807 18:27:56.381281   44266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 18:27:56.488327   44266 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 18:27:56.488440   44266 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 18:27:56.624126   44266 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 18:27:56.624281   44266 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 18:27:56.624494   44266 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 18:27:56.873034   44266 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 18:27:56.990825   44266 out.go:204]   - Generating certificates and keys ...
	I0807 18:27:56.990959   44266 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 18:27:56.991052   44266 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 18:27:57.066929   44266 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 18:27:57.283110   44266 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 18:27:57.486271   44266 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 18:27:57.678831   44266 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 18:27:57.750579   44266 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 18:27:57.750810   44266 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198246 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0807 18:27:58.190149   44266 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 18:27:58.190378   44266 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198246 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0807 18:27:58.450761   44266 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 18:27:58.618895   44266 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 18:27:58.844633   44266 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 18:27:58.844738   44266 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 18:27:58.940356   44266 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 18:27:59.154431   44266 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 18:27:59.281640   44266 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 18:27:59.360167   44266 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 18:27:59.439806   44266 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 18:27:59.440368   44266 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 18:27:59.443581   44266 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 18:27:59.445418   44266 out.go:204]   - Booting up control plane ...
	I0807 18:27:59.445508   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 18:27:59.446192   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 18:27:59.446957   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 18:27:59.461414   44266 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 18:27:59.462292   44266 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 18:27:59.462339   44266 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 18:27:59.600411   44266 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 18:27:59.600545   44266 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 18:28:00.099690   44266 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.352376ms
	I0807 18:28:00.099804   44266 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 18:28:05.959621   44266 kubeadm.go:310] [api-check] The API server is healthy after 5.862299271s
	I0807 18:28:05.975846   44266 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 18:28:06.014624   44266 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 18:28:06.042297   44266 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 18:28:06.042535   44266 kubeadm.go:310] [mark-control-plane] Marking the node ha-198246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 18:28:06.057030   44266 kubeadm.go:310] [bootstrap-token] Using token: acde14.b8y6evu3gygtakpe
	I0807 18:28:06.058575   44266 out.go:204]   - Configuring RBAC rules ...
	I0807 18:28:06.058714   44266 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 18:28:06.066217   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 18:28:06.080020   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 18:28:06.087791   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 18:28:06.092681   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 18:28:06.096020   44266 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 18:28:06.374616   44266 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 18:28:06.820636   44266 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 18:28:07.370115   44266 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 18:28:07.370141   44266 kubeadm.go:310] 
	I0807 18:28:07.370203   44266 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 18:28:07.370211   44266 kubeadm.go:310] 
	I0807 18:28:07.370295   44266 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 18:28:07.370303   44266 kubeadm.go:310] 
	I0807 18:28:07.370345   44266 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 18:28:07.370425   44266 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 18:28:07.370496   44266 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 18:28:07.370506   44266 kubeadm.go:310] 
	I0807 18:28:07.370578   44266 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 18:28:07.370587   44266 kubeadm.go:310] 
	I0807 18:28:07.370652   44266 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 18:28:07.370661   44266 kubeadm.go:310] 
	I0807 18:28:07.370747   44266 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 18:28:07.370856   44266 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 18:28:07.370953   44266 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 18:28:07.370964   44266 kubeadm.go:310] 
	I0807 18:28:07.371074   44266 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 18:28:07.371188   44266 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 18:28:07.371231   44266 kubeadm.go:310] 
	I0807 18:28:07.371348   44266 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token acde14.b8y6evu3gygtakpe \
	I0807 18:28:07.371521   44266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 \
	I0807 18:28:07.371563   44266 kubeadm.go:310] 	--control-plane 
	I0807 18:28:07.371570   44266 kubeadm.go:310] 
	I0807 18:28:07.371677   44266 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 18:28:07.371685   44266 kubeadm.go:310] 
	I0807 18:28:07.371782   44266 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token acde14.b8y6evu3gygtakpe \
	I0807 18:28:07.371920   44266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 
	I0807 18:28:07.372287   44266 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
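kubeadm init completes and prints join commands for additional control-plane nodes and workers; minikube performs those joins itself when it provisions the next nodes (ha-198246-m02 is started at the end of this excerpt), copying the shared CA material between nodes itself since the upload-certs phase was skipped above. The only warning left is the disabled kubelet unit, which on a long-lived node would be addressed with:

  sudo systemctl enable kubelet.service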
	I0807 18:28:07.372313   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:28:07.372321   44266 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:28:07.374314   44266 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 18:28:07.375747   44266 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 18:28:07.381422   44266 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 18:28:07.381440   44266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 18:28:07.402202   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
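Because a multi-node (HA) cluster was requested, minikube picks kindnet as the CNI and applies its manifest with the bundled kubectl; the earlier stat on /opt/cni/bin/portmap only confirms that the standard CNI plugins ship in the guest image. Whether the pod network actually works shows up as nodes turning Ready:

  ls /opt/cni/bin                              # bundled CNI plugin binaries
  kubectl get pods -n kube-system -o wide      # the CNI pods should reach Running
  kubectl get nodes                            # nodes flip to Ready once the CNI is active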
	I0807 18:28:07.779600   44266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 18:28:07.779665   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:07.779684   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246 minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=true
	I0807 18:28:07.798602   44266 ops.go:34] apiserver oom_adj: -16
	I0807 18:28:08.016757   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:08.517348   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:09.017382   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:09.516834   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:10.017237   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:10.517025   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:11.017393   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:11.517645   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:12.017006   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:12.517242   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:13.017450   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:13.517301   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:14.017420   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:14.517520   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:15.016948   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:15.517754   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:16.016906   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:16.517576   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:17.017692   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:17.517425   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:18.017663   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:18.517834   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:19.017814   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:19.516956   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.017249   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.516839   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.593055   44266 kubeadm.go:1113] duration metric: took 12.81344917s to wait for elevateKubeSystemPrivileges
	I0807 18:28:20.593093   44266 kubeadm.go:394] duration metric: took 24.364034512s to StartCluster
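The run of identical "kubectl get sa default" calls is a poll: the minikube-rbac clusterrolebinding created just above is only useful once the controller-manager has populated the new cluster's service accounts, so minikube retries roughly twice a second until the default ServiceAccount resolves (about 12.8s here). The wait amounts to:

  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done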
	I0807 18:28:20.593114   44266 settings.go:142] acquiring lock: {Name:mke44792daf8192c7cb4430e19df00c0686edd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:20.593205   44266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:28:20.593898   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/kubeconfig: {Name:mk9a4ad53bf4447453626a7769211592f39f92fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:20.594131   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 18:28:20.594146   44266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 18:28:20.594202   44266 addons.go:69] Setting storage-provisioner=true in profile "ha-198246"
	I0807 18:28:20.594127   44266 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:28:20.594240   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:28:20.594244   44266 addons.go:234] Setting addon storage-provisioner=true in "ha-198246"
	I0807 18:28:20.594253   44266 addons.go:69] Setting default-storageclass=true in profile "ha-198246"
	I0807 18:28:20.594272   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:20.594281   44266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198246"
	I0807 18:28:20.594338   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:20.594625   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.594626   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.594650   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.594656   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.609354   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0807 18:28:20.609414   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0807 18:28:20.609790   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.609862   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.610266   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.610283   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.610389   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.610410   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.610618   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.610723   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.610793   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.611217   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.611247   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.612810   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:28:20.613017   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 18:28:20.613524   44266 addons.go:234] Setting addon default-storageclass=true in "ha-198246"
	I0807 18:28:20.613554   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:20.613774   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.613789   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.613921   44266 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 18:28:20.626271   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0807 18:28:20.626822   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.627365   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.627390   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.627663   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.627835   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.628446   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0807 18:28:20.628810   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.629360   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.629383   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.629588   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:20.629689   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.630104   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.630142   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.631750   44266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 18:28:20.633155   44266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:28:20.633177   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 18:28:20.633196   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:20.636123   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.636542   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:20.636568   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.636689   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:20.636880   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:20.637013   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:20.637127   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:20.646444   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0807 18:28:20.646903   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.647381   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.647407   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.647701   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.647869   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.649389   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:20.649621   44266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 18:28:20.649639   44266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 18:28:20.649655   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:20.652497   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.652927   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:20.652955   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.653107   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:20.653303   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:20.653475   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:20.653622   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:20.708691   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 18:28:20.771288   44266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:28:20.803911   44266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 18:28:21.019837   44266 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
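The long sed pipeline at 18:28:20.708 rewrites the CoreDNS Corefile in place, inserting a hosts stanza that maps host.minikube.internal to the host gateway 192.168.39.1 (and enabling the log plugin) before replacing the ConfigMap through kubectl. The result can be inspected with:

  kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'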
	I0807 18:28:21.230253   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230275   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230346   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230366   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230561   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.230578   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.230586   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230594   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230677   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.230683   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.230695   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.230703   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230710   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.232188   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.232193   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.232228   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.232229   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.232250   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.232250   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.232412   44266 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0807 18:28:21.232423   44266 round_trippers.go:469] Request Headers:
	I0807 18:28:21.232433   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:28:21.232442   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:28:21.244939   44266 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:28:21.245737   44266 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 18:28:21.245758   44266 round_trippers.go:469] Request Headers:
	I0807 18:28:21.245768   44266 round_trippers.go:473]     Content-Type: application/json
	I0807 18:28:21.245779   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:28:21.245784   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:28:21.248031   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:28:21.248194   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.248228   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.248571   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.248579   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.248602   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.251329   44266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 18:28:21.252717   44266 addons.go:510] duration metric: took 658.567856ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 18:28:21.252752   44266 start.go:246] waiting for cluster config update ...
	I0807 18:28:21.252766   44266 start.go:255] writing updated cluster config ...
	I0807 18:28:21.254496   44266 out.go:177] 
	I0807 18:28:21.255798   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:21.255869   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:21.257396   44266 out.go:177] * Starting "ha-198246-m02" control-plane node in "ha-198246" cluster
	I0807 18:28:21.258544   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:28:21.258563   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:28:21.258651   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:28:21.258666   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:28:21.258740   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:21.258909   44266 start.go:360] acquireMachinesLock for ha-198246-m02: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:28:21.258952   44266 start.go:364] duration metric: took 24.011µs to acquireMachinesLock for "ha-198246-m02"
	I0807 18:28:21.258975   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:28:21.259059   44266 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0807 18:28:21.260585   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:28:21.260664   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:21.260685   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:21.274940   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0807 18:28:21.275393   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:21.275883   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:21.275912   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:21.276238   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:21.276417   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:21.276559   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:21.276724   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:28:21.276747   44266 client.go:168] LocalClient.Create starting
	I0807 18:28:21.276782   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:28:21.276821   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:28:21.276844   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:28:21.276909   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:28:21.276934   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:28:21.276948   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:28:21.276978   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:28:21.276990   44266 main.go:141] libmachine: (ha-198246-m02) Calling .PreCreateCheck
	I0807 18:28:21.277160   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:21.277503   44266 main.go:141] libmachine: Creating machine...
	I0807 18:28:21.277517   44266 main.go:141] libmachine: (ha-198246-m02) Calling .Create
	I0807 18:28:21.277664   44266 main.go:141] libmachine: (ha-198246-m02) Creating KVM machine...
	I0807 18:28:21.278838   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found existing default KVM network
	I0807 18:28:21.278997   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found existing private KVM network mk-ha-198246
	I0807 18:28:21.279157   44266 main.go:141] libmachine: (ha-198246-m02) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 ...
	I0807 18:28:21.279196   44266 main.go:141] libmachine: (ha-198246-m02) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:28:21.279252   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.279170   44685 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:28:21.279333   44266 main.go:141] libmachine: (ha-198246-m02) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:28:21.511603   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.511453   44685 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa...
	I0807 18:28:21.728136   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.727998   44685 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/ha-198246-m02.rawdisk...
	I0807 18:28:21.728162   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Writing magic tar header
	I0807 18:28:21.728173   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Writing SSH key tar header
	I0807 18:28:21.728181   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.728108   44685 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 ...
	I0807 18:28:21.728242   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02
	I0807 18:28:21.728271   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 (perms=drwx------)
	I0807 18:28:21.728290   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:28:21.728311   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:28:21.728325   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:28:21.728339   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:28:21.728355   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:28:21.728367   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:28:21.728379   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:28:21.728394   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home
	I0807 18:28:21.728405   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Skipping /home - not owner
	I0807 18:28:21.728422   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:28:21.728437   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:28:21.728467   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:28:21.728487   44266 main.go:141] libmachine: (ha-198246-m02) Creating domain...
	I0807 18:28:21.729349   44266 main.go:141] libmachine: (ha-198246-m02) define libvirt domain using xml: 
	I0807 18:28:21.729400   44266 main.go:141] libmachine: (ha-198246-m02) <domain type='kvm'>
	I0807 18:28:21.729415   44266 main.go:141] libmachine: (ha-198246-m02)   <name>ha-198246-m02</name>
	I0807 18:28:21.729428   44266 main.go:141] libmachine: (ha-198246-m02)   <memory unit='MiB'>2200</memory>
	I0807 18:28:21.729439   44266 main.go:141] libmachine: (ha-198246-m02)   <vcpu>2</vcpu>
	I0807 18:28:21.729455   44266 main.go:141] libmachine: (ha-198246-m02)   <features>
	I0807 18:28:21.729466   44266 main.go:141] libmachine: (ha-198246-m02)     <acpi/>
	I0807 18:28:21.729474   44266 main.go:141] libmachine: (ha-198246-m02)     <apic/>
	I0807 18:28:21.729486   44266 main.go:141] libmachine: (ha-198246-m02)     <pae/>
	I0807 18:28:21.729493   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.729502   44266 main.go:141] libmachine: (ha-198246-m02)   </features>
	I0807 18:28:21.729510   44266 main.go:141] libmachine: (ha-198246-m02)   <cpu mode='host-passthrough'>
	I0807 18:28:21.729518   44266 main.go:141] libmachine: (ha-198246-m02)   
	I0807 18:28:21.729530   44266 main.go:141] libmachine: (ha-198246-m02)   </cpu>
	I0807 18:28:21.729541   44266 main.go:141] libmachine: (ha-198246-m02)   <os>
	I0807 18:28:21.729550   44266 main.go:141] libmachine: (ha-198246-m02)     <type>hvm</type>
	I0807 18:28:21.729563   44266 main.go:141] libmachine: (ha-198246-m02)     <boot dev='cdrom'/>
	I0807 18:28:21.729574   44266 main.go:141] libmachine: (ha-198246-m02)     <boot dev='hd'/>
	I0807 18:28:21.729587   44266 main.go:141] libmachine: (ha-198246-m02)     <bootmenu enable='no'/>
	I0807 18:28:21.729597   44266 main.go:141] libmachine: (ha-198246-m02)   </os>
	I0807 18:28:21.729627   44266 main.go:141] libmachine: (ha-198246-m02)   <devices>
	I0807 18:28:21.729665   44266 main.go:141] libmachine: (ha-198246-m02)     <disk type='file' device='cdrom'>
	I0807 18:28:21.729686   44266 main.go:141] libmachine: (ha-198246-m02)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/boot2docker.iso'/>
	I0807 18:28:21.729699   44266 main.go:141] libmachine: (ha-198246-m02)       <target dev='hdc' bus='scsi'/>
	I0807 18:28:21.729711   44266 main.go:141] libmachine: (ha-198246-m02)       <readonly/>
	I0807 18:28:21.729721   44266 main.go:141] libmachine: (ha-198246-m02)     </disk>
	I0807 18:28:21.729734   44266 main.go:141] libmachine: (ha-198246-m02)     <disk type='file' device='disk'>
	I0807 18:28:21.729748   44266 main.go:141] libmachine: (ha-198246-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:28:21.729765   44266 main.go:141] libmachine: (ha-198246-m02)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/ha-198246-m02.rawdisk'/>
	I0807 18:28:21.729773   44266 main.go:141] libmachine: (ha-198246-m02)       <target dev='hda' bus='virtio'/>
	I0807 18:28:21.729781   44266 main.go:141] libmachine: (ha-198246-m02)     </disk>
	I0807 18:28:21.729789   44266 main.go:141] libmachine: (ha-198246-m02)     <interface type='network'>
	I0807 18:28:21.729799   44266 main.go:141] libmachine: (ha-198246-m02)       <source network='mk-ha-198246'/>
	I0807 18:28:21.729807   44266 main.go:141] libmachine: (ha-198246-m02)       <model type='virtio'/>
	I0807 18:28:21.729816   44266 main.go:141] libmachine: (ha-198246-m02)     </interface>
	I0807 18:28:21.729832   44266 main.go:141] libmachine: (ha-198246-m02)     <interface type='network'>
	I0807 18:28:21.729845   44266 main.go:141] libmachine: (ha-198246-m02)       <source network='default'/>
	I0807 18:28:21.729856   44266 main.go:141] libmachine: (ha-198246-m02)       <model type='virtio'/>
	I0807 18:28:21.729868   44266 main.go:141] libmachine: (ha-198246-m02)     </interface>
	I0807 18:28:21.729875   44266 main.go:141] libmachine: (ha-198246-m02)     <serial type='pty'>
	I0807 18:28:21.729887   44266 main.go:141] libmachine: (ha-198246-m02)       <target port='0'/>
	I0807 18:28:21.729895   44266 main.go:141] libmachine: (ha-198246-m02)     </serial>
	I0807 18:28:21.729923   44266 main.go:141] libmachine: (ha-198246-m02)     <console type='pty'>
	I0807 18:28:21.729945   44266 main.go:141] libmachine: (ha-198246-m02)       <target type='serial' port='0'/>
	I0807 18:28:21.729956   44266 main.go:141] libmachine: (ha-198246-m02)     </console>
	I0807 18:28:21.729961   44266 main.go:141] libmachine: (ha-198246-m02)     <rng model='virtio'>
	I0807 18:28:21.729976   44266 main.go:141] libmachine: (ha-198246-m02)       <backend model='random'>/dev/random</backend>
	I0807 18:28:21.729987   44266 main.go:141] libmachine: (ha-198246-m02)     </rng>
	I0807 18:28:21.729995   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.730002   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.730010   44266 main.go:141] libmachine: (ha-198246-m02)   </devices>
	I0807 18:28:21.730016   44266 main.go:141] libmachine: (ha-198246-m02) </domain>
	I0807 18:28:21.730025   44266 main.go:141] libmachine: (ha-198246-m02) 
	I0807 18:28:21.736803   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:b2:15:7e in network default
	I0807 18:28:21.737390   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring networks are active...
	I0807 18:28:21.737416   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:21.738108   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring network default is active
	I0807 18:28:21.738450   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring network mk-ha-198246 is active
	I0807 18:28:21.738836   44266 main.go:141] libmachine: (ha-198246-m02) Getting domain xml...
	I0807 18:28:21.739511   44266 main.go:141] libmachine: (ha-198246-m02) Creating domain...
	I0807 18:28:22.980593   44266 main.go:141] libmachine: (ha-198246-m02) Waiting to get IP...
	I0807 18:28:22.981319   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:22.981653   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:22.981678   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:22.981626   44685 retry.go:31] will retry after 277.857687ms: waiting for machine to come up
	I0807 18:28:23.261356   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.261928   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.261955   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.261836   44685 retry.go:31] will retry after 296.896309ms: waiting for machine to come up
	I0807 18:28:23.560474   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.560953   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.560974   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.560905   44685 retry.go:31] will retry after 431.200025ms: waiting for machine to come up
	I0807 18:28:23.993408   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.993831   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.993860   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.993783   44685 retry.go:31] will retry after 489.747622ms: waiting for machine to come up
	I0807 18:28:24.485553   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:24.486096   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:24.486118   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:24.486038   44685 retry.go:31] will retry after 595.37365ms: waiting for machine to come up
	I0807 18:28:25.082858   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:25.083273   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:25.083297   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:25.083229   44685 retry.go:31] will retry after 864.817898ms: waiting for machine to come up
	I0807 18:28:25.949301   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:25.949755   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:25.949787   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:25.949705   44685 retry.go:31] will retry after 980.056682ms: waiting for machine to come up
	I0807 18:28:26.931211   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:26.931633   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:26.931667   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:26.931574   44685 retry.go:31] will retry after 1.374312311s: waiting for machine to come up
	I0807 18:28:28.308159   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:28.308539   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:28.308588   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:28.308503   44685 retry.go:31] will retry after 1.32565444s: waiting for machine to come up
	I0807 18:28:29.635739   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:29.636210   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:29.636236   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:29.636128   44685 retry.go:31] will retry after 2.094612533s: waiting for machine to come up
	I0807 18:28:31.731860   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:31.732338   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:31.732366   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:31.732297   44685 retry.go:31] will retry after 2.384083205s: waiting for machine to come up
	I0807 18:28:34.117344   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:34.117765   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:34.117789   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:34.117726   44685 retry.go:31] will retry after 3.244651745s: waiting for machine to come up
	I0807 18:28:37.364060   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:37.364496   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:37.364524   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:37.364456   44685 retry.go:31] will retry after 3.883256435s: waiting for machine to come up
	I0807 18:28:41.249166   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.249651   44266 main.go:141] libmachine: (ha-198246-m02) Found IP for machine: 192.168.39.251
	I0807 18:28:41.249682   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.249838   44266 main.go:141] libmachine: (ha-198246-m02) Reserving static IP address...
	I0807 18:28:41.250128   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find host DHCP lease matching {name: "ha-198246-m02", mac: "52:54:00:c8:91:fc", ip: "192.168.39.251"} in network mk-ha-198246
	I0807 18:28:41.326471   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Getting to WaitForSSH function...
	I0807 18:28:41.326495   44266 main.go:141] libmachine: (ha-198246-m02) Reserved static IP address: 192.168.39.251
	I0807 18:28:41.326537   44266 main.go:141] libmachine: (ha-198246-m02) Waiting for SSH to be available...
	I0807 18:28:41.329224   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.329503   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.329528   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.329675   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using SSH client type: external
	I0807 18:28:41.329699   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa (-rw-------)
	I0807 18:28:41.329726   44266 main.go:141] libmachine: (ha-198246-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:28:41.329738   44266 main.go:141] libmachine: (ha-198246-m02) DBG | About to run SSH command:
	I0807 18:28:41.329752   44266 main.go:141] libmachine: (ha-198246-m02) DBG | exit 0
	I0807 18:28:41.456688   44266 main.go:141] libmachine: (ha-198246-m02) DBG | SSH cmd err, output: <nil>: 
	I0807 18:28:41.457054   44266 main.go:141] libmachine: (ha-198246-m02) KVM machine creation complete!
	I0807 18:28:41.457342   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:41.457876   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:41.458082   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:41.458245   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:28:41.458260   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:28:41.459550   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:28:41.459565   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:28:41.459572   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:28:41.459578   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.461855   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.462198   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.462225   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.462361   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.462552   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.462697   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.462811   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.463068   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.463266   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.463277   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:28:41.563789   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:28:41.563815   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:28:41.563825   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.566883   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.567241   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.567263   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.567445   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.567660   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.567837   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.567975   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.568253   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.568452   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.568470   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:28:41.669364   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:28:41.669425   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:28:41.669432   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:28:41.669440   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.669653   44266 buildroot.go:166] provisioning hostname "ha-198246-m02"
	I0807 18:28:41.669679   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.669860   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.672464   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.672770   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.672793   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.672942   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.673104   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.673265   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.673412   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.673627   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.673943   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.673966   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246-m02 && echo "ha-198246-m02" | sudo tee /etc/hostname
	I0807 18:28:41.792440   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246-m02
	
	I0807 18:28:41.792466   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.795604   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.795966   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.795984   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.796230   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.796424   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.796595   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.796740   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.796885   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.797037   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.797053   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:28:41.906596   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:28:41.906633   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:28:41.906652   44266 buildroot.go:174] setting up certificates
	I0807 18:28:41.906662   44266 provision.go:84] configureAuth start
	I0807 18:28:41.906670   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.906995   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:41.909871   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.910201   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.910258   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.910405   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.912630   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.912923   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.912952   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.913098   44266 provision.go:143] copyHostCerts
	I0807 18:28:41.913133   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:28:41.913171   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:28:41.913181   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:28:41.913262   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:28:41.913348   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:28:41.913371   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:28:41.913380   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:28:41.913419   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:28:41.913479   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:28:41.913502   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:28:41.913510   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:28:41.913543   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:28:41.913607   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246-m02 san=[127.0.0.1 192.168.39.251 ha-198246-m02 localhost minikube]
	I0807 18:28:42.029415   44266 provision.go:177] copyRemoteCerts
	I0807 18:28:42.029466   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:28:42.029488   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.031816   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.032108   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.032134   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.032373   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.032590   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.032761   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.032906   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.115186   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:28:42.115248   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:28:42.139771   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:28:42.139888   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:28:42.166463   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:28:42.166547   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:28:42.191360   44266 provision.go:87] duration metric: took 284.686105ms to configureAuth
	I0807 18:28:42.191394   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:28:42.191575   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:42.191639   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.194385   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.194831   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.194853   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.195191   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.195376   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.195544   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.195680   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.195895   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:42.196044   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:42.196058   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:28:42.467289   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:28:42.467319   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:28:42.467328   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetURL
	I0807 18:28:42.468563   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using libvirt version 6000000
	I0807 18:28:42.470865   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.471205   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.471243   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.471411   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:28:42.471434   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:28:42.471451   44266 client.go:171] duration metric: took 21.19468682s to LocalClient.Create
	I0807 18:28:42.471481   44266 start.go:167] duration metric: took 21.194756451s to libmachine.API.Create "ha-198246"
	I0807 18:28:42.471493   44266 start.go:293] postStartSetup for "ha-198246-m02" (driver="kvm2")
	I0807 18:28:42.471507   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:28:42.471534   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.471773   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:28:42.471806   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.474080   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.474413   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.474433   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.474545   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.474739   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.474895   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.475097   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.560490   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:28:42.565161   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:28:42.565195   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:28:42.565275   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:28:42.565387   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:28:42.565402   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:28:42.565531   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:28:42.576441   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:28:42.601887   44266 start.go:296] duration metric: took 130.379831ms for postStartSetup
	I0807 18:28:42.601945   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:42.602524   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:42.605525   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.605930   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.605957   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.606232   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:42.606422   44266 start.go:128] duration metric: took 21.347355066s to createHost
	I0807 18:28:42.606445   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.608659   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.609011   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.609037   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.609154   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.609339   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.609509   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.609670   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.609881   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:42.610037   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:42.610048   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:28:42.712453   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055322.681885034
	
	I0807 18:28:42.712476   44266 fix.go:216] guest clock: 1723055322.681885034
	I0807 18:28:42.712486   44266 fix.go:229] Guest: 2024-08-07 18:28:42.681885034 +0000 UTC Remote: 2024-08-07 18:28:42.606435256 +0000 UTC m=+80.917455918 (delta=75.449778ms)
	I0807 18:28:42.712505   44266 fix.go:200] guest clock delta is within tolerance: 75.449778ms
	I0807 18:28:42.712511   44266 start.go:83] releasing machines lock for "ha-198246-m02", held for 21.453548489s
	I0807 18:28:42.712528   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.712799   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:42.715436   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.715971   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.716003   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.718499   44266 out.go:177] * Found network options:
	I0807 18:28:42.719912   44266 out.go:177]   - NO_PROXY=192.168.39.196
	W0807 18:28:42.721156   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:28:42.721186   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.721776   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.721994   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.722091   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:28:42.722129   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	W0807 18:28:42.722390   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:28:42.722461   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:28:42.722484   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.724944   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725052   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725311   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.725352   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725460   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.725478   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.725500   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725609   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.725652   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.725814   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.725827   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.725956   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.725974   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.726095   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.958805   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:28:42.964806   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:28:42.964894   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:28:42.981388   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:28:42.981416   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:28:42.981488   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:28:42.997458   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:28:43.012016   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:28:43.012089   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:28:43.025912   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:28:43.039739   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:28:43.155400   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:28:43.303225   44266 docker.go:233] disabling docker service ...
	I0807 18:28:43.303286   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:28:43.318739   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:28:43.332532   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:28:43.472596   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:28:43.605966   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:28:43.619925   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:28:43.638588   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:28:43.638650   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.649283   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:28:43.649357   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.659951   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.670486   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.680962   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:28:43.691796   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.702576   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.720080   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
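The sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; only the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl are touched. A quick way to confirm the result on the guest is a sketch like the following (assumes a shell on the m02 VM; expected values are reconstructed from the sed patterns above, not copied from the machine):
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected:
  #   pause_image = "registry.k8s.io/pause:3.9"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",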
	I0807 18:28:43.730366   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:28:43.739403   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:28:43.739465   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:28:43.752984   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
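The sysctl failure above only means the br_netfilter module is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding; both are prerequisites for bridge-based pod networking. A minimal sketch of the same preparation (plain Linux shell commands, not minikube's exact code):
  if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
    sudo modprobe br_netfilter                                 # provides the bridge-nf-call-* sysctls
  fi
  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null   # kube-proxy/CNI rely on forwarding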
	I0807 18:28:43.764481   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:28:43.897332   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:28:44.051283   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:28:44.051350   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:28:44.056138   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:28:44.056186   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:28:44.060100   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:28:44.107041   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:28:44.107160   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:28:44.136233   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:28:44.172438   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:28:44.174040   44266 out.go:177]   - env NO_PROXY=192.168.39.196
	I0807 18:28:44.175421   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:44.178934   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:44.179638   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:44.179664   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:44.179936   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:28:44.184425   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:28:44.197404   44266 mustload.go:65] Loading cluster: ha-198246
	I0807 18:28:44.197592   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:44.197871   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:44.197898   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:44.212129   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0807 18:28:44.212590   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:44.213046   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:44.213066   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:44.213444   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:44.213618   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:44.215209   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:44.215490   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:44.215512   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:44.229524   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0807 18:28:44.229880   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:44.230343   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:44.230365   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:44.230754   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:44.230920   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:44.231062   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.251
	I0807 18:28:44.231075   44266 certs.go:194] generating shared ca certs ...
	I0807 18:28:44.231089   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.231203   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:28:44.231239   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:28:44.231248   44266 certs.go:256] generating profile certs ...
	I0807 18:28:44.231307   44266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:28:44.231330   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680
	I0807 18:28:44.231342   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.254]
	I0807 18:28:44.559979   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 ...
	I0807 18:28:44.560015   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680: {Name:mk532d2b707d0b4ff2030a049398865e8e454aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.560219   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680 ...
	I0807 18:28:44.560234   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680: {Name:mkd4bd0dec009d42e6ef356f3ddf31b6cb75091b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.560311   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:28:44.560448   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:28:44.560582   44266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:28:44.560598   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:28:44.560612   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:28:44.560628   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:28:44.560643   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:28:44.560658   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:28:44.560672   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:28:44.560687   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:28:44.560701   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:28:44.560749   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:28:44.560780   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:28:44.560791   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:28:44.560824   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:28:44.560849   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:28:44.560873   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:28:44.560916   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:28:44.560944   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:28:44.560960   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:28:44.560975   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:44.561016   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:44.564346   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:44.564706   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:44.564750   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:44.564946   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:44.565119   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:44.565260   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:44.565409   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:44.636542   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0807 18:28:44.641908   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:28:44.653641   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0807 18:28:44.658316   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:28:44.670858   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:28:44.676928   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:28:44.688768   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:28:44.693467   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:28:44.706551   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:28:44.711030   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:28:44.721174   44266 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0807 18:28:44.725233   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0807 18:28:44.735693   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:28:44.760511   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:28:44.784292   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:28:44.808965   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:28:44.832316   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 18:28:44.855893   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:28:44.879876   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:28:44.903976   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:28:44.927934   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:28:44.951549   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:28:44.976102   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:28:45.000438   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:28:45.017427   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:28:45.034095   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:28:45.051357   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:28:45.068164   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:28:45.084903   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0807 18:28:45.102266   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:28:45.119661   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:28:45.125624   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:28:45.137494   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.142382   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.142458   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.148370   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:28:45.159984   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:28:45.171372   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.176093   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.176163   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.182048   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:28:45.193770   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:28:45.205285   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.209824   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.209886   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.215494   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
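The openssl/ln pairs above follow the standard OpenSSL subject-hash layout for the system trust store: each CA is hashed and symlinked as <hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA here). For a single certificate the sequence is roughly this sketch (variable names are illustrative):
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash, e.g. b5213941
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # register it as a trusted CA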
	I0807 18:28:45.226843   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:28:45.231043   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:28:45.231100   44266 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.30.3 crio true true} ...
	I0807 18:28:45.231200   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:28:45.231226   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:28:45.231271   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:28:45.250153   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:28:45.250214   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
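The manifest above runs kube-vip as a static pod on every control-plane node: the instances elect a leader through the plndr-cp-lock lease in kube-system, and the leader announces the virtual IP 192.168.39.254 on eth0 via ARP while load-balancing API traffic on port 8443. A sketch of how the VIP placement could be checked once the cluster is up (assumes kubectl access and the eth0 interface configured above):
  kubectl -n kube-system get lease plndr-cp-lock -o wide   # shows the current kube-vip leader
  ip -4 addr show dev eth0 | grep 192.168.39.254           # on the leader, the VIP is bound here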
	I0807 18:28:45.250259   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:28:45.260907   44266 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:28:45.260967   44266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:28:45.270880   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:28:45.270914   44266 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0807 18:28:45.270924   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:28:45.270930   44266 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0807 18:28:45.270992   44266 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:28:45.275789   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:28:45.275817   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:29:16.535217   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:29:16.535295   44266 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:29:16.541424   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:29:16.541476   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:29:46.649609   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:29:46.666629   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:29:46.666743   44266 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:29:46.671680   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:29:46.671715   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0807 18:29:47.065172   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:29:47.074897   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0807 18:29:47.091782   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:29:47.108598   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:29:47.125245   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:29:47.129682   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:29:47.142072   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:29:47.274574   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:29:47.291877   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:29:47.292235   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:29:47.292273   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:29:47.307242   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I0807 18:29:47.307720   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:29:47.308297   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:29:47.308318   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:29:47.308692   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:29:47.308878   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:29:47.309043   44266 start.go:317] joinCluster: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:29:47.309174   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:29:47.309196   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:29:47.312164   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:29:47.312576   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:29:47.312598   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:29:47.312741   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:29:47.312882   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:29:47.313026   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:29:47.313145   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:29:47.474155   44266 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:29:47.474212   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78qnj6.khw60v382x7suzf2 --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I0807 18:30:09.531673   44266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78qnj6.khw60v382x7suzf2 --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (22.057432801s)
	I0807 18:30:09.531712   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:30:10.063192   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246-m02 minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=false
	I0807 18:30:10.185705   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198246-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:30:10.289663   44266 start.go:319] duration metric: took 22.980616289s to joinCluster
	I0807 18:30:10.289758   44266 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:30:10.290021   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:30:10.291609   44266 out.go:177] * Verifying Kubernetes components...
	I0807 18:30:10.293124   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:30:10.576909   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:30:10.661898   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:30:10.662179   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:30:10.662254   44266 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0807 18:30:10.662605   44266 node_ready.go:35] waiting up to 6m0s for node "ha-198246-m02" to be "Ready" ...
	I0807 18:30:10.662743   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:10.662754   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:10.662761   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:10.662767   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:10.674699   44266 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:30:11.163679   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:11.163703   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:11.163714   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:11.163720   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:11.167619   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:11.662940   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:11.662963   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:11.662969   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:11.662974   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:11.667816   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:12.163352   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:12.163385   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:12.163396   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:12.163402   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:12.172406   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:30:12.662926   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:12.662951   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:12.662956   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:12.662961   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:12.667516   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:12.668218   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:13.163680   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:13.163706   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:13.163714   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:13.163719   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:13.168225   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:13.663108   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:13.663128   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:13.663136   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:13.663140   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:13.667026   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:14.163223   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:14.163246   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:14.163255   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:14.163263   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:14.167544   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:14.663519   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:14.663545   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:14.663556   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:14.663562   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:14.667595   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:15.163698   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:15.163725   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:15.163738   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:15.163746   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:15.167266   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:15.167770   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:15.662914   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:15.662941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:15.662951   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:15.662957   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:15.666490   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:16.163391   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:16.163419   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:16.163429   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:16.163435   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:16.167182   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:16.663906   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:16.663932   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:16.663943   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:16.663948   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:16.668176   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:17.162965   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:17.163049   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:17.163068   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:17.163080   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:17.167431   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:17.168285   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:17.663405   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:17.663429   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:17.663440   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:17.663447   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:17.667918   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:18.162918   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:18.162941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:18.162949   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:18.162953   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:18.166467   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:18.663722   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:18.663747   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:18.663757   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:18.663762   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:18.668276   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:19.162945   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:19.162966   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:19.162973   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:19.162978   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:19.166353   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:19.663155   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:19.663178   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:19.663187   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:19.663192   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:19.667298   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:19.668045   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:20.163460   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:20.163483   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:20.163490   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:20.163493   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:20.166765   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:20.662987   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:20.663010   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:20.663021   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:20.663027   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:20.666411   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:21.163219   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:21.163241   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:21.163249   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:21.163252   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:21.167248   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:21.663138   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:21.663163   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:21.663171   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:21.663177   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:21.666630   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:22.163469   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:22.163496   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:22.163507   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:22.163513   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:22.166849   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:22.167436   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:22.663333   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:22.663354   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:22.663364   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:22.663369   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:22.667068   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:23.163229   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:23.163251   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:23.163259   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:23.163263   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:23.167845   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:23.663204   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:23.663224   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:23.663232   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:23.663236   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:23.667349   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:24.163700   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:24.163721   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:24.163727   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:24.163730   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:24.166893   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:24.167762   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:24.663135   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:24.663185   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:24.663196   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:24.663200   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:24.667220   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:25.163134   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:25.163158   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:25.163167   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:25.163171   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:25.167004   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:25.663113   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:25.663133   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:25.663141   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:25.663144   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:25.666348   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:26.162882   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:26.162907   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:26.162918   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:26.162923   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:26.166232   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:26.663636   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:26.663655   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:26.663663   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:26.663668   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:26.668096   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:26.668860   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:27.162906   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:27.162932   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:27.162956   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:27.162961   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:27.166246   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:27.662814   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:27.662838   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:27.662849   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:27.662855   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:27.666937   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:28.163385   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:28.163408   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:28.163419   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:28.163425   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:28.166996   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:28.663197   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:28.663220   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:28.663227   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:28.663231   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:28.666970   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.163041   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.163064   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.163072   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.163077   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.166751   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.167186   44266 node_ready.go:49] node "ha-198246-m02" has status "Ready":"True"
	I0807 18:30:29.167202   44266 node_ready.go:38] duration metric: took 18.504556301s for node "ha-198246-m02" to be "Ready" ...
	I0807 18:30:29.167209   44266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:30:29.167270   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:29.167282   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.167291   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.167298   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.172316   44266 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:30:29.179632   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.179698   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rbnrx
	I0807 18:30:29.179705   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.179713   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.179716   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.182900   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.183633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.183648   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.183658   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.183664   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.186729   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.187287   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.187309   44266 pod_ready.go:81] duration metric: took 7.655346ms for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.187321   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.187382   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6w6g
	I0807 18:30:29.187393   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.187403   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.187407   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.190210   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.190826   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.190843   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.190852   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.190860   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.193702   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.194277   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.194298   44266 pod_ready.go:81] duration metric: took 6.969332ms for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.194310   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.194367   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246
	I0807 18:30:29.194377   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.194385   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.194388   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.197299   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.197836   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.197850   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.197857   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.197862   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.200079   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.200638   44266 pod_ready.go:92] pod "etcd-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.200658   44266 pod_ready.go:81] duration metric: took 6.339465ms for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.200671   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.200727   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m02
	I0807 18:30:29.200736   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.200746   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.200754   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.202945   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.203877   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.203893   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.203901   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.203907   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.205928   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.206548   44266 pod_ready.go:92] pod "etcd-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.206567   44266 pod_ready.go:81] duration metric: took 5.88553ms for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.206585   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.363426   44266 request.go:629] Waited for 156.781521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:30:29.363516   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:30:29.363524   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.363536   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.363544   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.367464   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.563699   44266 request.go:629] Waited for 195.39754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.563755   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.563761   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.563771   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.563776   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.568245   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:29.568903   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.568920   44266 pod_ready.go:81] duration metric: took 362.325252ms for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.568929   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.764067   44266 request.go:629] Waited for 195.080715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:30:29.764156   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:30:29.764163   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.764175   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.764183   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.767929   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.964048   44266 request.go:629] Waited for 195.354316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.964123   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.964133   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.964143   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.964148   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.967577   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.968090   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.968117   44266 pod_ready.go:81] duration metric: took 399.182286ms for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.968126   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.163142   44266 request.go:629] Waited for 194.953767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:30:30.163221   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:30:30.163231   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.163244   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.163253   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.166706   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.363808   44266 request.go:629] Waited for 196.398052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:30.363874   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:30.363885   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.363895   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.363904   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.367698   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.368173   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:30.368190   44266 pod_ready.go:81] duration metric: took 400.057957ms for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.368215   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.563270   44266 request.go:629] Waited for 194.991431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:30:30.563343   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:30:30.563350   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.563360   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.563365   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.566556   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.763133   44266 request.go:629] Waited for 196.018941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:30.763191   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:30.763198   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.763206   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.763217   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.766348   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.767082   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:30.767100   44266 pod_ready.go:81] duration metric: took 398.876067ms for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.767118   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.963064   44266 request.go:629] Waited for 195.878143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:30:30.963131   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:30:30.963137   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.963144   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.963151   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.966736   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.163920   44266 request.go:629] Waited for 196.37962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.164005   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.164017   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.164028   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.164037   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.168411   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:31.168975   44266 pod_ready.go:92] pod "kube-proxy-4l79v" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.168994   44266 pod_ready.go:81] duration metric: took 401.867348ms for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.169006   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.363592   44266 request.go:629] Waited for 194.511545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:30:31.363668   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:30:31.363675   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.363685   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.363691   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.368028   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:31.563121   44266 request.go:629] Waited for 194.293236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:31.563213   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:31.563223   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.563234   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.563244   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.566830   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.567571   44266 pod_ready.go:92] pod "kube-proxy-m5ng2" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.567600   44266 pod_ready.go:81] duration metric: took 398.576464ms for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.567631   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.764083   44266 request.go:629] Waited for 196.35828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:30:31.764163   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:30:31.764177   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.764191   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.764199   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.767212   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:31.963277   44266 request.go:629] Waited for 195.395503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.963339   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.963343   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.963350   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.963354   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.966678   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.967366   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.967384   44266 pod_ready.go:81] duration metric: took 399.739353ms for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.967393   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:32.163518   44266 request.go:629] Waited for 196.071536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:30:32.163576   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:30:32.163581   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.163589   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.163593   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.167125   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.363259   44266 request.go:629] Waited for 195.352702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:32.363309   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:32.363314   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.363325   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.363330   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.366413   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.367013   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:32.367033   44266 pod_ready.go:81] duration metric: took 399.634584ms for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:32.367043   44266 pod_ready.go:38] duration metric: took 3.199823963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:30:32.367059   44266 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:30:32.367111   44266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:30:32.385352   44266 api_server.go:72] duration metric: took 22.095548352s to wait for apiserver process to appear ...
	I0807 18:30:32.385377   44266 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:30:32.385393   44266 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0807 18:30:32.391376   44266 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0807 18:30:32.391449   44266 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0807 18:30:32.391462   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.391472   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.391483   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.392358   44266 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0807 18:30:32.392431   44266 api_server.go:141] control plane version: v1.30.3
	I0807 18:30:32.392445   44266 api_server.go:131] duration metric: took 7.062347ms to wait for apiserver health ...
	I0807 18:30:32.392452   44266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:30:32.563869   44266 request.go:629] Waited for 171.348742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.563921   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.563931   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.563938   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.563942   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.569072   44266 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:30:32.573329   44266 system_pods.go:59] 17 kube-system pods found
	I0807 18:30:32.573357   44266 system_pods.go:61] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:30:32.573361   44266 system_pods.go:61] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:30:32.573364   44266 system_pods.go:61] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:30:32.573367   44266 system_pods.go:61] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:30:32.573370   44266 system_pods.go:61] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:30:32.573373   44266 system_pods.go:61] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:30:32.573376   44266 system_pods.go:61] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:30:32.573380   44266 system_pods.go:61] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:30:32.573383   44266 system_pods.go:61] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:30:32.573386   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:30:32.573390   44266 system_pods.go:61] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:30:32.573393   44266 system_pods.go:61] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:30:32.573396   44266 system_pods.go:61] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:30:32.573398   44266 system_pods.go:61] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:30:32.573402   44266 system_pods.go:61] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:30:32.573405   44266 system_pods.go:61] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:30:32.573408   44266 system_pods.go:61] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:30:32.573414   44266 system_pods.go:74] duration metric: took 180.956026ms to wait for pod list to return data ...
	I0807 18:30:32.573421   44266 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:30:32.763885   44266 request.go:629] Waited for 190.379686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:30:32.763936   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:30:32.763941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.763948   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.763954   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.767012   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.767286   44266 default_sa.go:45] found service account: "default"
	I0807 18:30:32.767313   44266 default_sa.go:55] duration metric: took 193.885113ms for default service account to be created ...
	I0807 18:30:32.767324   44266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:30:32.963765   44266 request.go:629] Waited for 196.363852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.963831   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.963837   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.963844   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.963850   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.970431   44266 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:30:32.975184   44266 system_pods.go:86] 17 kube-system pods found
	I0807 18:30:32.975211   44266 system_pods.go:89] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:30:32.975219   44266 system_pods.go:89] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:30:32.975225   44266 system_pods.go:89] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:30:32.975231   44266 system_pods.go:89] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:30:32.975237   44266 system_pods.go:89] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:30:32.975242   44266 system_pods.go:89] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:30:32.975249   44266 system_pods.go:89] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:30:32.975254   44266 system_pods.go:89] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:30:32.975261   44266 system_pods.go:89] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:30:32.975268   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:30:32.975277   44266 system_pods.go:89] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:30:32.975284   44266 system_pods.go:89] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:30:32.975291   44266 system_pods.go:89] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:30:32.975297   44266 system_pods.go:89] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:30:32.975303   44266 system_pods.go:89] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:30:32.975312   44266 system_pods.go:89] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:30:32.975318   44266 system_pods.go:89] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:30:32.975327   44266 system_pods.go:126] duration metric: took 207.996289ms to wait for k8s-apps to be running ...
	I0807 18:30:32.975339   44266 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:30:32.975391   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:30:32.989953   44266 system_svc.go:56] duration metric: took 14.606769ms WaitForService to wait for kubelet
	I0807 18:30:32.989979   44266 kubeadm.go:582] duration metric: took 22.700179334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:30:32.989999   44266 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:30:33.163417   44266 request.go:629] Waited for 173.330443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0807 18:30:33.163468   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0807 18:30:33.163473   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:33.163480   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:33.163484   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:33.167772   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:33.168822   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:30:33.168846   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:30:33.168861   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:30:33.168866   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:30:33.168872   44266 node_conditions.go:105] duration metric: took 178.867475ms to run NodePressure ...
	I0807 18:30:33.168893   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:30:33.168926   44266 start.go:255] writing updated cluster config ...
	I0807 18:30:33.170904   44266 out.go:177] 
	I0807 18:30:33.172264   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:30:33.172352   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:30:33.173860   44266 out.go:177] * Starting "ha-198246-m03" control-plane node in "ha-198246" cluster
	I0807 18:30:33.175358   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:30:33.175380   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:30:33.175467   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:30:33.175477   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:30:33.175556   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:30:33.175701   44266 start.go:360] acquireMachinesLock for ha-198246-m03: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:30:33.175740   44266 start.go:364] duration metric: took 21.742µs to acquireMachinesLock for "ha-198246-m03"
	I0807 18:30:33.175759   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:30:33.175842   44266 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0807 18:30:33.177325   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:30:33.177407   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:30:33.177444   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:30:33.191872   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0807 18:30:33.192346   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:30:33.192788   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:30:33.192811   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:30:33.193150   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:30:33.193346   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:30:33.193498   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:30:33.193662   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:30:33.193682   44266 client.go:168] LocalClient.Create starting
	I0807 18:30:33.193707   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:30:33.193739   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:30:33.193753   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:30:33.193811   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:30:33.193842   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:30:33.193854   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:30:33.193877   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:30:33.193888   44266 main.go:141] libmachine: (ha-198246-m03) Calling .PreCreateCheck
	I0807 18:30:33.194049   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:30:33.194488   44266 main.go:141] libmachine: Creating machine...
	I0807 18:30:33.194501   44266 main.go:141] libmachine: (ha-198246-m03) Calling .Create
	I0807 18:30:33.194651   44266 main.go:141] libmachine: (ha-198246-m03) Creating KVM machine...
	I0807 18:30:33.195893   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found existing default KVM network
	I0807 18:30:33.196007   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found existing private KVM network mk-ha-198246
	I0807 18:30:33.196136   44266 main.go:141] libmachine: (ha-198246-m03) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 ...
	I0807 18:30:33.196160   44266 main.go:141] libmachine: (ha-198246-m03) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:30:33.196236   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.196136   45290 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:30:33.196342   44266 main.go:141] libmachine: (ha-198246-m03) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:30:33.432780   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.432647   45290 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa...
	I0807 18:30:33.529287   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.529189   45290 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/ha-198246-m03.rawdisk...
	I0807 18:30:33.529318   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Writing magic tar header
	I0807 18:30:33.529332   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Writing SSH key tar header
	I0807 18:30:33.529343   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.529299   45290 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 ...
	I0807 18:30:33.529393   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03
	I0807 18:30:33.529414   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:30:33.529433   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 (perms=drwx------)
	I0807 18:30:33.529447   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:30:33.529464   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:30:33.529477   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:30:33.529492   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:30:33.529508   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:30:33.529523   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:30:33.529538   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home
	I0807 18:30:33.529554   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:30:33.529567   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:30:33.529579   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Skipping /home - not owner
	I0807 18:30:33.529595   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:30:33.529611   44266 main.go:141] libmachine: (ha-198246-m03) Creating domain...
	I0807 18:30:33.530487   44266 main.go:141] libmachine: (ha-198246-m03) define libvirt domain using xml: 
	I0807 18:30:33.530514   44266 main.go:141] libmachine: (ha-198246-m03) <domain type='kvm'>
	I0807 18:30:33.530527   44266 main.go:141] libmachine: (ha-198246-m03)   <name>ha-198246-m03</name>
	I0807 18:30:33.530534   44266 main.go:141] libmachine: (ha-198246-m03)   <memory unit='MiB'>2200</memory>
	I0807 18:30:33.530544   44266 main.go:141] libmachine: (ha-198246-m03)   <vcpu>2</vcpu>
	I0807 18:30:33.530555   44266 main.go:141] libmachine: (ha-198246-m03)   <features>
	I0807 18:30:33.530564   44266 main.go:141] libmachine: (ha-198246-m03)     <acpi/>
	I0807 18:30:33.530574   44266 main.go:141] libmachine: (ha-198246-m03)     <apic/>
	I0807 18:30:33.530582   44266 main.go:141] libmachine: (ha-198246-m03)     <pae/>
	I0807 18:30:33.530593   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.530604   44266 main.go:141] libmachine: (ha-198246-m03)   </features>
	I0807 18:30:33.530615   44266 main.go:141] libmachine: (ha-198246-m03)   <cpu mode='host-passthrough'>
	I0807 18:30:33.530622   44266 main.go:141] libmachine: (ha-198246-m03)   
	I0807 18:30:33.530630   44266 main.go:141] libmachine: (ha-198246-m03)   </cpu>
	I0807 18:30:33.530637   44266 main.go:141] libmachine: (ha-198246-m03)   <os>
	I0807 18:30:33.530647   44266 main.go:141] libmachine: (ha-198246-m03)     <type>hvm</type>
	I0807 18:30:33.530659   44266 main.go:141] libmachine: (ha-198246-m03)     <boot dev='cdrom'/>
	I0807 18:30:33.530673   44266 main.go:141] libmachine: (ha-198246-m03)     <boot dev='hd'/>
	I0807 18:30:33.530702   44266 main.go:141] libmachine: (ha-198246-m03)     <bootmenu enable='no'/>
	I0807 18:30:33.530724   44266 main.go:141] libmachine: (ha-198246-m03)   </os>
	I0807 18:30:33.530735   44266 main.go:141] libmachine: (ha-198246-m03)   <devices>
	I0807 18:30:33.530748   44266 main.go:141] libmachine: (ha-198246-m03)     <disk type='file' device='cdrom'>
	I0807 18:30:33.530766   44266 main.go:141] libmachine: (ha-198246-m03)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/boot2docker.iso'/>
	I0807 18:30:33.530778   44266 main.go:141] libmachine: (ha-198246-m03)       <target dev='hdc' bus='scsi'/>
	I0807 18:30:33.530790   44266 main.go:141] libmachine: (ha-198246-m03)       <readonly/>
	I0807 18:30:33.530800   44266 main.go:141] libmachine: (ha-198246-m03)     </disk>
	I0807 18:30:33.530813   44266 main.go:141] libmachine: (ha-198246-m03)     <disk type='file' device='disk'>
	I0807 18:30:33.530826   44266 main.go:141] libmachine: (ha-198246-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:30:33.530840   44266 main.go:141] libmachine: (ha-198246-m03)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/ha-198246-m03.rawdisk'/>
	I0807 18:30:33.530856   44266 main.go:141] libmachine: (ha-198246-m03)       <target dev='hda' bus='virtio'/>
	I0807 18:30:33.530868   44266 main.go:141] libmachine: (ha-198246-m03)     </disk>
	I0807 18:30:33.530892   44266 main.go:141] libmachine: (ha-198246-m03)     <interface type='network'>
	I0807 18:30:33.530906   44266 main.go:141] libmachine: (ha-198246-m03)       <source network='mk-ha-198246'/>
	I0807 18:30:33.530917   44266 main.go:141] libmachine: (ha-198246-m03)       <model type='virtio'/>
	I0807 18:30:33.530927   44266 main.go:141] libmachine: (ha-198246-m03)     </interface>
	I0807 18:30:33.530938   44266 main.go:141] libmachine: (ha-198246-m03)     <interface type='network'>
	I0807 18:30:33.530952   44266 main.go:141] libmachine: (ha-198246-m03)       <source network='default'/>
	I0807 18:30:33.530963   44266 main.go:141] libmachine: (ha-198246-m03)       <model type='virtio'/>
	I0807 18:30:33.530976   44266 main.go:141] libmachine: (ha-198246-m03)     </interface>
	I0807 18:30:33.530986   44266 main.go:141] libmachine: (ha-198246-m03)     <serial type='pty'>
	I0807 18:30:33.530996   44266 main.go:141] libmachine: (ha-198246-m03)       <target port='0'/>
	I0807 18:30:33.531010   44266 main.go:141] libmachine: (ha-198246-m03)     </serial>
	I0807 18:30:33.531020   44266 main.go:141] libmachine: (ha-198246-m03)     <console type='pty'>
	I0807 18:30:33.531031   44266 main.go:141] libmachine: (ha-198246-m03)       <target type='serial' port='0'/>
	I0807 18:30:33.531043   44266 main.go:141] libmachine: (ha-198246-m03)     </console>
	I0807 18:30:33.531053   44266 main.go:141] libmachine: (ha-198246-m03)     <rng model='virtio'>
	I0807 18:30:33.531067   44266 main.go:141] libmachine: (ha-198246-m03)       <backend model='random'>/dev/random</backend>
	I0807 18:30:33.531078   44266 main.go:141] libmachine: (ha-198246-m03)     </rng>
	I0807 18:30:33.531119   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.531138   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.531151   44266 main.go:141] libmachine: (ha-198246-m03)   </devices>
	I0807 18:30:33.531165   44266 main.go:141] libmachine: (ha-198246-m03) </domain>
	I0807 18:30:33.531182   44266 main.go:141] libmachine: (ha-198246-m03) 
	I0807 18:30:33.537482   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9f:ab:f5 in network default
	I0807 18:30:33.538090   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring networks are active...
	I0807 18:30:33.538108   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:33.538784   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring network default is active
	I0807 18:30:33.539152   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring network mk-ha-198246 is active
	I0807 18:30:33.539485   44266 main.go:141] libmachine: (ha-198246-m03) Getting domain xml...
	I0807 18:30:33.540252   44266 main.go:141] libmachine: (ha-198246-m03) Creating domain...
	I0807 18:30:34.756035   44266 main.go:141] libmachine: (ha-198246-m03) Waiting to get IP...
	I0807 18:30:34.756939   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:34.757511   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:34.757577   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:34.757477   45290 retry.go:31] will retry after 227.908957ms: waiting for machine to come up
	I0807 18:30:34.986907   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:34.987323   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:34.987354   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:34.987276   45290 retry.go:31] will retry after 246.835339ms: waiting for machine to come up
	I0807 18:30:35.235616   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:35.236094   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:35.236119   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:35.236046   45290 retry.go:31] will retry after 426.907083ms: waiting for machine to come up
	I0807 18:30:35.664761   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:35.665183   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:35.665243   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:35.665182   45290 retry.go:31] will retry after 507.132694ms: waiting for machine to come up
	I0807 18:30:36.173688   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:36.174085   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:36.174115   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:36.174025   45290 retry.go:31] will retry after 466.332078ms: waiting for machine to come up
	I0807 18:30:36.642374   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:36.642869   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:36.642896   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:36.642788   45290 retry.go:31] will retry after 802.371451ms: waiting for machine to come up
	I0807 18:30:37.446742   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:37.447182   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:37.447204   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:37.447149   45290 retry.go:31] will retry after 1.058258348s: waiting for machine to come up
	I0807 18:30:38.506869   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:38.507277   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:38.507303   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:38.507243   45290 retry.go:31] will retry after 1.24813663s: waiting for machine to come up
	I0807 18:30:39.757276   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:39.757679   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:39.757708   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:39.757653   45290 retry.go:31] will retry after 1.347201318s: waiting for machine to come up
	I0807 18:30:41.107002   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:41.107475   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:41.107501   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:41.107433   45290 retry.go:31] will retry after 2.164822694s: waiting for machine to come up
	I0807 18:30:43.273615   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:43.274030   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:43.274053   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:43.274008   45290 retry.go:31] will retry after 2.890209035s: waiting for machine to come up
	I0807 18:30:46.165557   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:46.166122   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:46.166152   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:46.166070   45290 retry.go:31] will retry after 3.463040417s: waiting for machine to come up
	I0807 18:30:49.630676   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:49.631090   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:49.631119   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:49.631053   45290 retry.go:31] will retry after 2.865023491s: waiting for machine to come up
	I0807 18:30:52.497203   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:52.497575   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:52.497598   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:52.497535   45290 retry.go:31] will retry after 4.944323257s: waiting for machine to come up
	I0807 18:30:57.446295   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.446732   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.446753   44266 main.go:141] libmachine: (ha-198246-m03) Found IP for machine: 192.168.39.227
	I0807 18:30:57.446766   44266 main.go:141] libmachine: (ha-198246-m03) Reserving static IP address...
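The retry lines above show the driver polling libvirt for the new domain's DHCP lease, sleeping for progressively longer (jittered) intervals until an IP appears. The following is only a minimal, hypothetical sketch of that kind of backoff poll; `lookupIP` is a stand-in and none of this is minikube's actual retry.go code.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt for the domain's DHCP lease.
// Placeholder behaviour: it fails until a few seconds have passed.
func lookupIP(start time.Time) (string, error) {
	if time.Since(start) < 5*time.Second {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.227", nil
}

func main() {
	start := time.Now()
	delay := 200 * time.Millisecond
	for {
		ip, err := lookupIP(start)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter, roughly mirroring the
		// increasing "will retry after ..." intervals in the log.
		delay += time.Duration(rand.Int63n(int64(delay)))
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}
```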
	I0807 18:30:57.447262   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find host DHCP lease matching {name: "ha-198246-m03", mac: "52:54:00:9d:24:52", ip: "192.168.39.227"} in network mk-ha-198246
	I0807 18:30:57.521164   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Getting to WaitForSSH function...
	I0807 18:30:57.521190   44266 main.go:141] libmachine: (ha-198246-m03) Reserved static IP address: 192.168.39.227
	I0807 18:30:57.521199   44266 main.go:141] libmachine: (ha-198246-m03) Waiting for SSH to be available...
	I0807 18:30:57.523681   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.524059   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246
	I0807 18:30:57.524105   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find defined IP address of network mk-ha-198246 interface with MAC address 52:54:00:9d:24:52
	I0807 18:30:57.524328   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH client type: external
	I0807 18:30:57.524353   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa (-rw-------)
	I0807 18:30:57.524381   44266 main.go:141] libmachine: (ha-198246-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:30:57.524422   44266 main.go:141] libmachine: (ha-198246-m03) DBG | About to run SSH command:
	I0807 18:30:57.524444   44266 main.go:141] libmachine: (ha-198246-m03) DBG | exit 0
	I0807 18:30:57.529188   44266 main.go:141] libmachine: (ha-198246-m03) DBG | SSH cmd err, output: exit status 255: 
	I0807 18:30:57.529209   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0807 18:30:57.529217   44266 main.go:141] libmachine: (ha-198246-m03) DBG | command : exit 0
	I0807 18:30:57.529223   44266 main.go:141] libmachine: (ha-198246-m03) DBG | err     : exit status 255
	I0807 18:30:57.529230   44266 main.go:141] libmachine: (ha-198246-m03) DBG | output  : 
	I0807 18:31:00.531629   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Getting to WaitForSSH function...
	I0807 18:31:00.534035   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.534413   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.534441   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.534511   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH client type: external
	I0807 18:31:00.534527   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa (-rw-------)
	I0807 18:31:00.534578   44266 main.go:141] libmachine: (ha-198246-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:31:00.534598   44266 main.go:141] libmachine: (ha-198246-m03) DBG | About to run SSH command:
	I0807 18:31:00.534623   44266 main.go:141] libmachine: (ha-198246-m03) DBG | exit 0
	I0807 18:31:00.664624   44266 main.go:141] libmachine: (ha-198246-m03) DBG | SSH cmd err, output: <nil>: 
	I0807 18:31:00.664910   44266 main.go:141] libmachine: (ha-198246-m03) KVM machine creation complete!
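The "Waiting for SSH" phase above works by repeatedly running `exit 0` on the guest with the external ssh client until the command succeeds (the first attempt fails with exit status 255 because sshd is not up yet). A small, hypothetical sketch of such a probe is shown below; the host, user and key path are illustrative, not taken from a real cluster, and this is not minikube's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with the external ssh client,
// mirroring the probe in the log above.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host, key := "192.168.39.227", "/path/to/id_rsa" // placeholder values
	for i := 0; i < 20; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready yet, retrying...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```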
	I0807 18:31:00.665347   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:31:00.665908   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:00.666128   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:00.666310   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:31:00.666326   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:31:00.667883   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:31:00.667900   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:31:00.667908   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:31:00.667916   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.670520   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.671001   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.671032   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.671175   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.671364   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.671513   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.671630   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.671786   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.671980   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.671990   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:31:00.787597   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:31:00.787623   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:31:00.787633   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.790865   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.791362   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.791388   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.791510   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.791714   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.791937   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.792190   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.792379   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.792539   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.792549   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:31:00.909345   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:31:00.909405   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:31:00.909414   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:31:00.909421   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:00.909684   44266 buildroot.go:166] provisioning hostname "ha-198246-m03"
	I0807 18:31:00.909709   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:00.909928   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.913329   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.913773   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.913798   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.913978   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.914169   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.914339   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.914512   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.914692   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.914895   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.914915   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246-m03 && echo "ha-198246-m03" | sudo tee /etc/hostname
	I0807 18:31:01.046391   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246-m03
	
	I0807 18:31:01.046419   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.049459   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.049924   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.049953   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.050088   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.050268   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.050448   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.050586   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.050755   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.050909   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.050924   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:31:01.178381   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
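Provisioning the hostname, as logged above, is two remote commands: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to the node name. As a hedged illustration only, the helper below reassembles that same shell snippet for an arbitrary node name; it is not the actual buildroot provisioner code.

```go
package main

import "fmt"

// hostnameCmd builds the remote provisioning snippet seen in the log:
// set the hostname, then ensure /etc/hosts carries a 127.0.1.1 entry for it.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-198246-m03"))
}
```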
	I0807 18:31:01.178417   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:31:01.178436   44266 buildroot.go:174] setting up certificates
	I0807 18:31:01.178447   44266 provision.go:84] configureAuth start
	I0807 18:31:01.178459   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:01.178749   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:01.181683   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.182031   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.182058   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.182247   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.184746   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.185072   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.185101   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.185229   44266 provision.go:143] copyHostCerts
	I0807 18:31:01.185260   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:31:01.185298   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:31:01.185309   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:31:01.185381   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:31:01.185480   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:31:01.185505   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:31:01.185514   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:31:01.185554   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:31:01.185619   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:31:01.185643   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:31:01.185648   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:31:01.185683   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:31:01.185753   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246-m03 san=[127.0.0.1 192.168.39.227 ha-198246-m03 localhost minikube]
	I0807 18:31:01.354582   44266 provision.go:177] copyRemoteCerts
	I0807 18:31:01.354653   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:31:01.354683   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.357461   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.357784   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.357817   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.358072   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.358268   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.358436   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.358560   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:01.447576   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:31:01.447656   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:31:01.475031   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:31:01.475102   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:31:01.501202   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:31:01.501289   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:31:01.528456   44266 provision.go:87] duration metric: took 349.995722ms to configureAuth
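configureAuth above first refreshes the host-side copies of ca.pem/cert.pem/key.pem ("found ..., removing ..." then "cp: ..."), generates a server certificate, and finally scps the PEMs into /etc/docker on the guest. The snippet below is a generic, assumed sketch of the local remove-then-copy step only; the paths are placeholders and this is not minikube's copyHostCerts.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

// copyHostCert replaces the destination with a fresh copy of the source,
// echoing the "found ..., removing ..." / "cp: ..." pattern in the log.
func copyHostCert(src, dstDir string) error {
	dst := filepath.Join(dstDir, filepath.Base(src))
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("found %s, removing ...\n", dst)
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Placeholder source path for illustration.
	if err := copyHostCert("certs/ca.pem", "."); err != nil {
		log.Fatal(err)
	}
}
```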
	I0807 18:31:01.528486   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:31:01.528699   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:01.528777   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.531665   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.532012   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.532042   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.532225   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.532423   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.532595   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.532702   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.532873   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.533031   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.533047   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:31:01.817075   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:31:01.817099   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:31:01.817118   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetURL
	I0807 18:31:01.818384   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using libvirt version 6000000
	I0807 18:31:01.821056   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.821418   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.821439   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.821580   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:31:01.821595   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:31:01.821603   44266 client.go:171] duration metric: took 28.627914411s to LocalClient.Create
	I0807 18:31:01.821631   44266 start.go:167] duration metric: took 28.627967701s to libmachine.API.Create "ha-198246"
	I0807 18:31:01.821643   44266 start.go:293] postStartSetup for "ha-198246-m03" (driver="kvm2")
	I0807 18:31:01.821659   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:31:01.821698   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:01.821917   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:31:01.821940   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.824112   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.824469   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.824488   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.824623   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.824800   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.824973   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.825155   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:01.915999   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:31:01.920422   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:31:01.920443   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:31:01.920514   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:31:01.920605   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:31:01.920618   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:31:01.920730   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:31:01.931294   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:31:01.955945   44266 start.go:296] duration metric: took 134.285824ms for postStartSetup
	I0807 18:31:01.956001   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:31:01.956611   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:01.959322   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.959688   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.959723   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.959995   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:31:01.960180   44266 start.go:128] duration metric: took 28.784328806s to createHost
	I0807 18:31:01.960233   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.962214   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.962551   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.962579   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.962733   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.962916   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.963080   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.963211   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.963362   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.963518   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.963528   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:31:02.077437   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055462.057555331
	
	I0807 18:31:02.077460   44266 fix.go:216] guest clock: 1723055462.057555331
	I0807 18:31:02.077470   44266 fix.go:229] Guest: 2024-08-07 18:31:02.057555331 +0000 UTC Remote: 2024-08-07 18:31:01.960191536 +0000 UTC m=+220.271212198 (delta=97.363795ms)
	I0807 18:31:02.077490   44266 fix.go:200] guest clock delta is within tolerance: 97.363795ms
	I0807 18:31:02.077497   44266 start.go:83] releasing machines lock for "ha-198246-m03", held for 28.901748397s
	I0807 18:31:02.077520   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.077788   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:02.081280   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.081885   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.081913   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.084121   44266 out.go:177] * Found network options:
	I0807 18:31:02.085422   44266 out.go:177]   - NO_PROXY=192.168.39.196,192.168.39.251
	W0807 18:31:02.086688   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:31:02.086711   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:31:02.086726   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087351   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087542   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087647   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:31:02.087697   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	W0807 18:31:02.087728   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:31:02.087754   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:31:02.087831   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:31:02.087877   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:02.090758   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.090950   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091267   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.091288   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091311   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.091327   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091450   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:02.091624   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:02.091638   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:02.091819   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:02.091826   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:02.091982   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:02.091975   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:02.092120   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:02.330635   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:31:02.338200   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:31:02.338275   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:31:02.355776   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:31:02.355798   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:31:02.355869   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:31:02.373960   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:31:02.388788   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:31:02.388863   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:31:02.402456   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:31:02.415862   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:31:02.528910   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:31:02.692177   44266 docker.go:233] disabling docker service ...
	I0807 18:31:02.692260   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:31:02.708366   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:31:02.722150   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:31:02.842254   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:31:02.963283   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:31:02.979860   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:31:03.000776   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:31:03.000833   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.012949   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:31:03.013019   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.025364   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.037815   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.050150   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:31:03.062786   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.074694   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.094223   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.106816   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:31:03.117233   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:31:03.117281   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:31:03.130652   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:31:03.140978   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:03.261390   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:31:03.415655   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:31:03.415731   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:31:03.420847   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:31:03.420894   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:31:03.424888   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:31:03.466634   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
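After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting crio, the log notes "Will wait 60s for socket path /var/run/crio/crio.sock" before checking crictl. A minimal sketch of that kind of socket wait is shown below, assuming a plain stat-based poll; it is illustrative only, not the start.go implementation.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path to appear, like the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
```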
	I0807 18:31:03.466722   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:31:03.495718   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:31:03.666880   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:31:03.742658   44266 out.go:177]   - env NO_PROXY=192.168.39.196
	I0807 18:31:03.816001   44266 out.go:177]   - env NO_PROXY=192.168.39.196,192.168.39.251
	I0807 18:31:03.888374   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:03.891307   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:03.891715   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:03.891745   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:03.891998   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:31:03.896652   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:31:03.912117   44266 mustload.go:65] Loading cluster: ha-198246
	I0807 18:31:03.912501   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:03.912897   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:03.912950   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:03.928344   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0807 18:31:03.928736   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:03.929306   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:03.929334   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:03.929692   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:03.929888   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:31:03.931789   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:31:03.932081   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:03.932119   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:03.947851   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0807 18:31:03.948291   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:03.948876   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:03.948893   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:03.949204   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:03.949455   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:31:03.949625   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.227
	I0807 18:31:03.949636   44266 certs.go:194] generating shared ca certs ...
	I0807 18:31:03.949650   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:03.949763   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:31:03.949804   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:31:03.949809   44266 certs.go:256] generating profile certs ...
	I0807 18:31:03.949874   44266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:31:03.949895   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5
	I0807 18:31:03.949910   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.227 192.168.39.254]
	I0807 18:31:04.235062   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 ...
	I0807 18:31:04.235104   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5: {Name:mkc9ab09dfcc0a08e4cded1def253097d11345ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:04.235325   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5 ...
	I0807 18:31:04.235345   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5: {Name:mk706ab9d0d4064858493bbf1c933d49d1f0fd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:04.235444   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:31:04.284244   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:31:04.284561   44266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
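The apiserver certificate generated above carries IP SANs for the service IP, localhost, each control-plane node and the shared VIP 192.168.39.254, so kubelets can reach any control plane through one endpoint. The block below is a rough, self-signed illustration of issuing a cert with such an IP SAN list using crypto/x509; the real flow signs with the cluster CA, and none of these names come from minikube's certs.go.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative SAN list mirroring the log: service IP, localhost,
	// the three control-plane node IPs and the shared VIP.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.196"), net.ParseIP("192.168.39.251"),
		net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     []string{"ha-198246-m03", "localhost", "minikube"},
	}
	// Self-signed here for brevity; the real flow signs with the cluster CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```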
	I0807 18:31:04.284585   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:31:04.284607   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:31:04.284635   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:31:04.284654   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:31:04.284671   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:31:04.284704   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:31:04.284726   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:31:04.284747   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:31:04.284824   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:31:04.284871   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:31:04.284888   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:31:04.284977   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:31:04.285053   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:31:04.285089   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:31:04.285156   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:31:04.285203   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.285227   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.285244   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.285288   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:31:04.288899   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:04.289445   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:31:04.289477   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:04.289641   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:31:04.289875   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:31:04.290047   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:31:04.290209   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:31:04.368646   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0807 18:31:04.375791   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:31:04.387726   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0807 18:31:04.392669   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:31:04.404818   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:31:04.409538   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:31:04.423952   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:31:04.429946   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:31:04.442196   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:31:04.447075   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:31:04.467205   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0807 18:31:04.472136   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0807 18:31:04.484789   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:31:04.513657   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:31:04.541568   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:31:04.570650   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:31:04.599209   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0807 18:31:04.624315   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:31:04.649418   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:31:04.674771   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:31:04.701297   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:31:04.728656   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:31:04.756136   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:31:04.783116   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:31:04.800682   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:31:04.818998   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:31:04.836194   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:31:04.854131   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:31:04.871939   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0807 18:31:04.888443   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:31:04.905275   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:31:04.911814   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:31:04.922949   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.927578   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.927640   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.934032   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:31:04.945014   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:31:04.957480   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.962404   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.962459   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.968351   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:31:04.980460   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:31:04.992337   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.997356   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.997422   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:05.003783   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
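
	The lines above show the cert-distribution step: each CA PEM is copied to /usr/share/ca-certificates, hashed with `openssl x509 -hash`, and symlinked as <hash>.0 under /etc/ssl/certs. A minimal Go sketch of that link step follows; it is an illustration only (the openssl binary on PATH and write access to /etc/ssl/certs are assumptions), not the code path minikube runs.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of pemPath and symlinks it as
	// <hash>.0 under certsDir, the same layout the log shows for 28052.pem,
	// 280522.pem and minikubeCA.pem.
	func linkCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("openssl hash: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
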
	I0807 18:31:05.015178   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:31:05.019430   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:31:05.019490   44266 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.30.3 crio true true} ...
	I0807 18:31:05.019580   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:31:05.019607   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:31:05.019640   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:31:05.036848   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:31:05.036914   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
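
	The generated kube-vip static-pod manifest above configures leader election over a Lease named plndr-cp-lock in kube-system (vip_leasename / cp_namespace) to decide which control-plane node advertises the 192.168.39.254 VIP. A short client-go sketch for inspecting that Lease is shown below; the kubeconfig path is an assumption for illustration, and this is not something the test itself does.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The Lease name and namespace come from the kube-vip config above.
		lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.TODO(), "plndr-cp-lock", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if lease.Spec.HolderIdentity != nil {
			fmt.Println("VIP currently held by:", *lease.Spec.HolderIdentity)
		}
	}
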
	I0807 18:31:05.036972   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:31:05.047848   44266 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:31:05.047893   44266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:31:05.058827   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:31:05.058853   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:31:05.058935   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:31:05.058827   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0807 18:31:05.058826   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0807 18:31:05.059037   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:31:05.059054   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:31:05.059162   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:31:05.063740   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:31:05.063770   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:31:05.093775   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:31:05.093820   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:31:05.093855   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:31:05.093875   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:31:05.147621   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:31:05.147679   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
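
	The "Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256" lines above describe a checksum-verified download: the binary is fetched and compared against the published .sha256 file before being copied to the node. A self-contained Go sketch of that pattern follows (URLs taken from the log, minimal error handling, whole file buffered in memory for brevity); it is an illustration of the technique, not minikube's downloader.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"strings"
	)

	// fetchVerified downloads binURL and verifies its SHA-256 digest against the
	// checksum published at sumURL.
	func fetchVerified(binURL, sumURL string) ([]byte, error) {
		resp, err := http.Get(binURL)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		data, err := io.ReadAll(resp.Body)
		if err != nil {
			return nil, err
		}

		sumResp, err := http.Get(sumURL)
		if err != nil {
			return nil, err
		}
		defer sumResp.Body.Close()
		sumBytes, err := io.ReadAll(sumResp.Body)
		if err != nil {
			return nil, err
		}
		want := strings.Fields(string(sumBytes))[0] // "<hex>" or "<hex>  kubectl"

		got := sha256.Sum256(data)
		if hex.EncodeToString(got[:]) != want {
			return nil, fmt.Errorf("checksum mismatch: got %x, want %s", got, want)
		}
		return data, nil
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
		if _, err := fetchVerified(base, base+".sha256"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("kubectl checksum verified")
	}
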
	I0807 18:31:06.022179   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:31:06.032561   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0807 18:31:06.051718   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:31:06.069963   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:31:06.088103   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:31:06.092277   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:31:06.105287   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:06.220917   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:31:06.238937   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:31:06.239328   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:06.239375   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:06.258371   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0807 18:31:06.258888   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:06.259464   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:06.259488   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:06.259882   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:06.260092   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:31:06.260264   44266 start.go:317] joinCluster: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:31:06.260379   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:31:06.260399   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:31:06.263930   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:06.264431   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:31:06.264458   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:06.264644   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:31:06.264810   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:31:06.264929   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:31:06.265035   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:31:06.435193   44266 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:31:06.435239   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token od1f23.v6j6x0epna3a85qa --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0807 18:31:30.206281   44266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token od1f23.v6j6x0epna3a85qa --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.77100816s)
	I0807 18:31:30.206317   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:31:30.813324   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246-m03 minikube.k8s.io/updated_at=2024_08_07T18_31_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=false
	I0807 18:31:30.964365   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198246-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:31:31.090417   44266 start.go:319] duration metric: took 24.830149142s to joinCluster
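
	After the kubeadm join above, the new node is labeled and its control-plane NoSchedule taint removed via kubectl. For reference, a client-go equivalent of the labeling step could look like the sketch below; label keys and values are copied from the log, the kubeconfig path is an assumption, and this is illustrative rather than the code minikube actually uses.

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Merge-patch the node metadata, the API-level equivalent of
		// `kubectl label --overwrite nodes ha-198246-m03 ...`.
		patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-198246","minikube.k8s.io/primary":"false"}}}`)
		if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "ha-198246-m03",
			types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
			log.Fatal(err)
		}
	}
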
	I0807 18:31:31.090498   44266 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:31:31.090781   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:31.093184   44266 out.go:177] * Verifying Kubernetes components...
	I0807 18:31:31.094437   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:31.342260   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:31:31.362745   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:31:31.363071   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:31:31.363166   44266 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0807 18:31:31.363437   44266 node_ready.go:35] waiting up to 6m0s for node "ha-198246-m03" to be "Ready" ...
	I0807 18:31:31.363528   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:31.363541   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:31.363551   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:31.363556   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:31.367408   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:31.864633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:31.864676   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:31.864702   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:31.864711   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:31.868168   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:32.363859   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:32.363895   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:32.363903   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:32.363908   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:32.367827   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:32.863813   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:32.863834   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:32.863841   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:32.863846   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:32.867002   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:33.363594   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:33.363616   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:33.363625   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:33.363631   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:33.368287   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:33.369938   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:33.864014   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:33.864035   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:33.864043   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:33.864050   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:33.868446   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:34.364544   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:34.364563   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:34.364568   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:34.364571   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:34.368487   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:34.863667   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:34.863695   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:34.863705   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:34.863711   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:34.867251   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:35.364368   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:35.364391   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:35.364397   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:35.364405   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:35.368606   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:35.370093   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:35.864081   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:35.864108   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:35.864120   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:35.864126   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:35.867805   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:36.363814   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:36.363838   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:36.363848   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:36.363854   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:36.367626   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:36.863972   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:36.863992   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:36.864000   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:36.864004   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:36.867776   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.363945   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:37.363966   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:37.363974   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:37.363977   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:37.367672   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.864665   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:37.864704   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:37.864712   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:37.864715   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:37.868330   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.869986   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:38.363639   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:38.363660   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:38.363668   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:38.363672   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:38.367008   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:38.863892   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:38.863919   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:38.863931   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:38.863935   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:38.872605   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:39.364337   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:39.364368   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:39.364375   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:39.364379   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:39.367983   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:39.863967   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:39.863990   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:39.863999   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:39.864003   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:39.867134   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:40.363638   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:40.363664   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:40.363675   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:40.363680   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:40.367384   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:40.368391   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:40.863639   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:40.863658   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:40.863665   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:40.863669   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:40.866980   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:41.364633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:41.364655   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:41.364665   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:41.364671   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:41.368280   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:41.864268   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:41.864288   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:41.864297   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:41.864301   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:41.868013   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:42.364521   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:42.364546   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:42.364557   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:42.364564   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:42.367904   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:42.368754   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:42.864040   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:42.864061   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:42.864069   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:42.864073   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:42.867582   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:43.363930   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:43.363950   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:43.363958   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:43.363961   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:43.368329   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:43.864004   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:43.864030   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:43.864042   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:43.864054   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:43.868118   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:44.364372   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:44.364399   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:44.364411   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:44.364416   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:44.367990   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:44.863922   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:44.863945   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:44.863957   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:44.863965   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:44.868231   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:44.869027   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:45.364338   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:45.364359   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:45.364367   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:45.364372   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:45.368558   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:45.863928   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:45.863949   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:45.863957   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:45.863962   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:45.867520   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.363984   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:46.364009   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:46.364017   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:46.364022   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:46.367693   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.864611   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:46.864635   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:46.864643   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:46.864647   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:46.868195   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.869168   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:47.363985   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:47.364006   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:47.364014   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:47.364018   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:47.367513   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:47.863715   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:47.863735   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:47.863743   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:47.863748   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:47.866941   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.364283   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:48.364304   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.364311   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.364315   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.368301   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.864297   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:48.864317   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.864326   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.864332   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.867691   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.868357   44266 node_ready.go:49] node "ha-198246-m03" has status "Ready":"True"
	I0807 18:31:48.868374   44266 node_ready.go:38] duration metric: took 17.504916336s for node "ha-198246-m03" to be "Ready" ...
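
	The repeated GET /api/v1/nodes/ha-198246-m03 calls above are a readiness poll: the node's Ready condition is checked on a short interval until it reports True or the 6m budget runs out. A compact Go sketch of that polling pattern with client-go follows; the kubeconfig path and the 500ms interval are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll the node's Ready condition until True or until 6m elapse.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-198246-m03", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("node ha-198246-m03 is Ready")
	}
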
	I0807 18:31:48.868382   44266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:48.868439   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:48.868447   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.868454   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.868458   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.875973   44266 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:31:48.882318   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.882408   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rbnrx
	I0807 18:31:48.882420   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.882431   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.882444   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.885507   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.886130   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.886147   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.886156   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.886162   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.888994   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.889510   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.889528   44266 pod_ready.go:81] duration metric: took 7.186047ms for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.889537   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.889582   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6w6g
	I0807 18:31:48.889589   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.889596   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.889601   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.893021   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.894159   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.894181   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.894188   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.894192   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.896425   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.896893   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.896909   44266 pod_ready.go:81] duration metric: took 7.366231ms for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.896917   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.896961   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246
	I0807 18:31:48.896967   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.896975   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.896982   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.899237   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.899953   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.899970   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.899978   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.899983   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.902186   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.902691   44266 pod_ready.go:92] pod "etcd-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.902715   44266 pod_ready.go:81] duration metric: took 5.790956ms for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.902726   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.902784   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m02
	I0807 18:31:48.902795   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.902803   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.902814   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.905329   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.905806   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:48.905821   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.905828   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.905832   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.908047   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.908655   44266 pod_ready.go:92] pod "etcd-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.908670   44266 pod_ready.go:81] duration metric: took 5.936535ms for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.908678   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.064661   44266 request.go:629] Waited for 155.923893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m03
	I0807 18:31:49.064753   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m03
	I0807 18:31:49.064759   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.064764   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.064772   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.068282   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.265339   44266 request.go:629] Waited for 196.371663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:49.265425   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:49.265438   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.265449   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.265456   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.268957   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.269555   44266 pod_ready.go:92] pod "etcd-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:49.269571   44266 pod_ready.go:81] duration metric: took 360.885615ms for pod "etcd-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
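
	The "Waited ... due to client-side throttling" messages in this stretch come from client-go's default rate limiter: with QPS and Burst left at 0 in the rest.Config (as in the kapi.go dump earlier), the client falls back to roughly 5 requests/second with a burst of 10, so bursts of node and pod GETs get queued. Raising the limits is a client configuration choice, sketched below purely for illustration (kubeconfig path and values are assumptions, not something this test changes).

	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cfg.QPS = 50    // allow more sustained requests per second than the default 5
		cfg.Burst = 100 // and a larger burst than the default 10 before throttling
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			log.Fatal(err)
		}
	}
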
	I0807 18:31:49.269587   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.464819   44266 request.go:629] Waited for 195.162513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:31:49.464903   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:31:49.464909   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.464916   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.464921   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.469362   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:49.664800   44266 request.go:629] Waited for 194.369823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:49.664876   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:49.664881   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.664887   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.664909   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.668254   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.668909   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:49.668928   44266 pod_ready.go:81] duration metric: took 399.332717ms for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.668937   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.864884   44266 request.go:629] Waited for 195.895244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:31:49.864939   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:31:49.864944   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.864964   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.864968   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.868343   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.065388   44266 request.go:629] Waited for 196.362909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:50.065438   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:50.065443   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.065450   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.065455   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.069435   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.070338   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.070357   44266 pod_ready.go:81] duration metric: took 401.414954ms for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.070367   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.264445   44266 request.go:629] Waited for 194.01249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m03
	I0807 18:31:50.264517   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m03
	I0807 18:31:50.264525   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.264534   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.264540   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.268180   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.465319   44266 request.go:629] Waited for 196.408254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:50.465387   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:50.465391   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.465398   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.465403   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.468707   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.469431   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.469449   44266 pod_ready.go:81] duration metric: took 399.076161ms for pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.469459   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.664732   44266 request.go:629] Waited for 195.186866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:31:50.664805   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:31:50.664816   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.664827   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.664835   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.668528   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.864795   44266 request.go:629] Waited for 195.34558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:50.864864   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:50.864871   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.864880   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.864888   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.867688   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:50.868601   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.868620   44266 pod_ready.go:81] duration metric: took 399.154742ms for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.868630   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.064678   44266 request.go:629] Waited for 195.987732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:31:51.064754   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:31:51.064761   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.064772   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.064783   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.068355   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.265387   44266 request.go:629] Waited for 196.386347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:51.265453   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:51.265460   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.265471   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.265480   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.269137   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.269661   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:51.269679   44266 pod_ready.go:81] duration metric: took 401.043609ms for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.269689   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.465093   44266 request.go:629] Waited for 195.339663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m03
	I0807 18:31:51.465157   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m03
	I0807 18:31:51.465165   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.465174   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.465179   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.468791   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.664927   44266 request.go:629] Waited for 195.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:51.664995   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:51.665006   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.665017   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.665027   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.668549   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.669363   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:51.669381   44266 pod_ready.go:81] duration metric: took 399.686225ms for pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.669390   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.865256   44266 request.go:629] Waited for 195.79115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:31:51.865313   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:31:51.865320   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.865329   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.865334   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.873470   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:52.064458   44266 request.go:629] Waited for 190.295419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:52.064521   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:52.064526   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.064533   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.064538   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.067938   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.068805   44266 pod_ready.go:92] pod "kube-proxy-4l79v" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.068837   44266 pod_ready.go:81] duration metric: took 399.436427ms for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.068851   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mttr" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.264784   44266 request.go:629] Waited for 195.867102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7mttr
	I0807 18:31:52.264838   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7mttr
	I0807 18:31:52.264843   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.264849   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.264852   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.269765   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:52.464903   44266 request.go:629] Waited for 194.439324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:52.464972   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:52.464983   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.464993   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.465002   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.468248   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.468752   44266 pod_ready.go:92] pod "kube-proxy-7mttr" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.468774   44266 pod_ready.go:81] duration metric: took 399.914652ms for pod "kube-proxy-7mttr" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.468783   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.664867   44266 request.go:629] Waited for 196.022855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:31:52.664951   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:31:52.664959   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.664973   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.664988   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.668228   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.865340   44266 request.go:629] Waited for 196.363915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:52.865394   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:52.865399   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.865406   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.865411   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.868878   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.869535   44266 pod_ready.go:92] pod "kube-proxy-m5ng2" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.869556   44266 pod_ready.go:81] duration metric: took 400.766778ms for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.869565   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.064548   44266 request.go:629] Waited for 194.920878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:31:53.064617   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:31:53.064625   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.064633   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.064640   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.068146   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.265207   44266 request.go:629] Waited for 196.43783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:53.265255   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:53.265260   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.265267   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.265272   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.268523   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.269186   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:53.269204   44266 pod_ready.go:81] duration metric: took 399.633139ms for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.269217   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.465360   44266 request.go:629] Waited for 196.088508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:31:53.465413   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:31:53.465418   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.465433   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.465450   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.468768   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.664761   44266 request.go:629] Waited for 195.371572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:53.664812   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:53.664817   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.664824   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.664827   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.668421   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.669073   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:53.669096   44266 pod_ready.go:81] duration metric: took 399.871721ms for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.669110   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.865211   44266 request.go:629] Waited for 196.027374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m03
	I0807 18:31:53.865290   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m03
	I0807 18:31:53.865298   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.865305   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.865314   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.868661   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.064951   44266 request.go:629] Waited for 195.756654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:54.065010   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:54.065018   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.065027   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.065032   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.068111   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.068802   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:54.068820   44266 pod_ready.go:81] duration metric: took 399.702974ms for pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:54.068830   44266 pod_ready.go:38] duration metric: took 5.200435833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:54.068843   44266 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:31:54.068887   44266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:31:54.084598   44266 api_server.go:72] duration metric: took 22.994065627s to wait for apiserver process to appear ...
	I0807 18:31:54.084621   44266 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:31:54.084641   44266 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0807 18:31:54.090716   44266 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0807 18:31:54.090787   44266 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0807 18:31:54.090798   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.090908   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.090933   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.091732   44266 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0807 18:31:54.091793   44266 api_server.go:141] control plane version: v1.30.3
	I0807 18:31:54.091810   44266 api_server.go:131] duration metric: took 7.181714ms to wait for apiserver health ...
	I0807 18:31:54.091828   44266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:31:54.264554   44266 request.go:629] Waited for 172.642251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.264604   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.264611   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.264621   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.264626   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.272067   44266 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:31:54.279487   44266 system_pods.go:59] 24 kube-system pods found
	I0807 18:31:54.279517   44266 system_pods.go:61] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:31:54.279526   44266 system_pods.go:61] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:31:54.279532   44266 system_pods.go:61] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:31:54.279537   44266 system_pods.go:61] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:31:54.279542   44266 system_pods.go:61] "etcd-ha-198246-m03" [8df491af-6c48-41d6-873f-c1c39afac2f8] Running
	I0807 18:31:54.279547   44266 system_pods.go:61] "kindnet-7854s" [f87d6292-b9b6-4f63-912c-9dfda0471e2e] Running
	I0807 18:31:54.279552   44266 system_pods.go:61] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:31:54.279556   44266 system_pods.go:61] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:31:54.279562   44266 system_pods.go:61] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:31:54.279567   44266 system_pods.go:61] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:31:54.279573   44266 system_pods.go:61] "kube-apiserver-ha-198246-m03" [c589756a-dda8-44a8-82bb-60532e74eb8b] Running
	I0807 18:31:54.279581   44266 system_pods.go:61] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:31:54.279587   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:31:54.279592   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m03" [5e0d97af-b071-4467-8c3a-dc71f904e84c] Running
	I0807 18:31:54.279597   44266 system_pods.go:61] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:31:54.279602   44266 system_pods.go:61] "kube-proxy-7mttr" [7cb96f6e-47a5-4d6c-a80e-77df1eafc970] Running
	I0807 18:31:54.279608   44266 system_pods.go:61] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:31:54.279616   44266 system_pods.go:61] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:31:54.279621   44266 system_pods.go:61] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:31:54.279626   44266 system_pods.go:61] "kube-scheduler-ha-198246-m03" [5fe100c3-b0a4-4499-a7e2-330c88ee8162] Running
	I0807 18:31:54.279633   44266 system_pods.go:61] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:31:54.279638   44266 system_pods.go:61] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:31:54.279643   44266 system_pods.go:61] "kube-vip-ha-198246-m03" [ba0ab294-fb6f-4161-82f7-288a2a0d4f13] Running
	I0807 18:31:54.279649   44266 system_pods.go:61] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:31:54.279657   44266 system_pods.go:74] duration metric: took 187.820696ms to wait for pod list to return data ...
	I0807 18:31:54.279670   44266 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:31:54.465078   44266 request.go:629] Waited for 185.333525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:31:54.465131   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:31:54.465136   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.465143   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.465169   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.467798   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:54.467923   44266 default_sa.go:45] found service account: "default"
	I0807 18:31:54.467940   44266 default_sa.go:55] duration metric: took 188.262232ms for default service account to be created ...
	I0807 18:31:54.467950   44266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:31:54.664308   44266 request.go:629] Waited for 196.296927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.664402   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.664413   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.664425   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.664436   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.673358   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:54.681646   44266 system_pods.go:86] 24 kube-system pods found
	I0807 18:31:54.681689   44266 system_pods.go:89] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:31:54.681698   44266 system_pods.go:89] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:31:54.681710   44266 system_pods.go:89] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:31:54.681722   44266 system_pods.go:89] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:31:54.681729   44266 system_pods.go:89] "etcd-ha-198246-m03" [8df491af-6c48-41d6-873f-c1c39afac2f8] Running
	I0807 18:31:54.681736   44266 system_pods.go:89] "kindnet-7854s" [f87d6292-b9b6-4f63-912c-9dfda0471e2e] Running
	I0807 18:31:54.681743   44266 system_pods.go:89] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:31:54.681760   44266 system_pods.go:89] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:31:54.681767   44266 system_pods.go:89] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:31:54.681773   44266 system_pods.go:89] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:31:54.681781   44266 system_pods.go:89] "kube-apiserver-ha-198246-m03" [c589756a-dda8-44a8-82bb-60532e74eb8b] Running
	I0807 18:31:54.681794   44266 system_pods.go:89] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:31:54.681805   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:31:54.681820   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m03" [5e0d97af-b071-4467-8c3a-dc71f904e84c] Running
	I0807 18:31:54.681830   44266 system_pods.go:89] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:31:54.681838   44266 system_pods.go:89] "kube-proxy-7mttr" [7cb96f6e-47a5-4d6c-a80e-77df1eafc970] Running
	I0807 18:31:54.681848   44266 system_pods.go:89] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:31:54.682159   44266 system_pods.go:89] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:31:54.682175   44266 system_pods.go:89] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:31:54.682180   44266 system_pods.go:89] "kube-scheduler-ha-198246-m03" [5fe100c3-b0a4-4499-a7e2-330c88ee8162] Running
	I0807 18:31:54.682185   44266 system_pods.go:89] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:31:54.682188   44266 system_pods.go:89] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:31:54.682192   44266 system_pods.go:89] "kube-vip-ha-198246-m03" [ba0ab294-fb6f-4161-82f7-288a2a0d4f13] Running
	I0807 18:31:54.682196   44266 system_pods.go:89] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:31:54.682205   44266 system_pods.go:126] duration metric: took 214.246128ms to wait for k8s-apps to be running ...
	I0807 18:31:54.682217   44266 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:31:54.682265   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:31:54.698973   44266 system_svc.go:56] duration metric: took 16.748968ms WaitForService to wait for kubelet
	I0807 18:31:54.699002   44266 kubeadm.go:582] duration metric: took 23.60847153s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:31:54.699020   44266 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:31:54.864327   44266 request.go:629] Waited for 165.224496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0807 18:31:54.864388   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0807 18:31:54.864395   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.864407   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.864413   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.867905   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.868930   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868950   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868961   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868964   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868968   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868971   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868974   44266 node_conditions.go:105] duration metric: took 169.949978ms to run NodePressure ...
	I0807 18:31:54.868985   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:31:54.869001   44266 start.go:255] writing updated cluster config ...
	I0807 18:31:54.869277   44266 ssh_runner.go:195] Run: rm -f paused
	I0807 18:31:54.921624   44266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 18:31:54.924829   44266 out.go:177] * Done! kubectl is now configured to use "ha-198246" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.831808734Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f821a2d-d23b-4080-99e1-cbf8a4f5e54e name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.832037588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723055519101351291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319342376725,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319067011712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,PodSandboxId:885cc92388628d238f8733c8a4e19dbe966de1d74cae5f0b0260d47f543204eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1723055318987833300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723055306768350208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723055302
363392306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,PodSandboxId:c3113eff4cbeab6d11557ebe28457c4fed8b799968cd7a8112552a9f26c0c7a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172305528372
0347825,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,PodSandboxId:1fcd84f97f1d17549fda334f2d795061561cad20b325aed47c328b7537d9e461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723055280599506170,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723055280563764082,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723055280588797776,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,PodSandboxId:30588dee2a435159b1676038c3a1e71d8e794c98f645bd6032392139ac087781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723055280520038813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f821a2d-d23b-4080-99e1-cbf8a4f5e54e name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.871622234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2a82338-7f10-48f2-b9e5-bbb45e4eb388 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.871696208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2a82338-7f10-48f2-b9e5-bbb45e4eb388 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.873015829Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41a4b20e-d479-4d14-a924-c7e1d2985925 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.873536962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723055761873509407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41a4b20e-d479-4d14-a924-c7e1d2985925 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.874196608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3178da30-3df4-42c1-97c1-d04aed2aa6e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.874249256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3178da30-3df4-42c1-97c1-d04aed2aa6e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.874560348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723055519101351291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319342376725,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319067011712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,PodSandboxId:885cc92388628d238f8733c8a4e19dbe966de1d74cae5f0b0260d47f543204eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1723055318987833300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723055306768350208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723055302
363392306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,PodSandboxId:c3113eff4cbeab6d11557ebe28457c4fed8b799968cd7a8112552a9f26c0c7a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172305528372
0347825,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,PodSandboxId:1fcd84f97f1d17549fda334f2d795061561cad20b325aed47c328b7537d9e461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723055280599506170,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723055280563764082,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723055280588797776,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,PodSandboxId:30588dee2a435159b1676038c3a1e71d8e794c98f645bd6032392139ac087781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723055280520038813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3178da30-3df4-42c1-97c1-d04aed2aa6e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.912298039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80187ddf-7726-42c1-b0a8-6d02ba51faae name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.912372568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80187ddf-7726-42c1-b0a8-6d02ba51faae name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.914290844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a0ceaff-c77d-4062-9539-12849746a133 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.915504751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723055761915422259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a0ceaff-c77d-4062-9539-12849746a133 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.916005464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b3b5b92-023d-40da-b76e-0a43591a0d9b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.916057673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b3b5b92-023d-40da-b76e-0a43591a0d9b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.916553689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723055519101351291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319342376725,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319067011712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,PodSandboxId:885cc92388628d238f8733c8a4e19dbe966de1d74cae5f0b0260d47f543204eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1723055318987833300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723055306768350208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723055302
363392306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,PodSandboxId:c3113eff4cbeab6d11557ebe28457c4fed8b799968cd7a8112552a9f26c0c7a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172305528372
0347825,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,PodSandboxId:1fcd84f97f1d17549fda334f2d795061561cad20b325aed47c328b7537d9e461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723055280599506170,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723055280563764082,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723055280588797776,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,PodSandboxId:30588dee2a435159b1676038c3a1e71d8e794c98f645bd6032392139ac087781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723055280520038813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b3b5b92-023d-40da-b76e-0a43591a0d9b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.949734274Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=b37eb974-3653-4b4a-af4e-369dde91fe8d name=/runtime.v1.RuntimeService/Status
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.949812728Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b37eb974-3653-4b4a-af4e-369dde91fe8d name=/runtime.v1.RuntimeService/Status
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.955914414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0e81d24-317d-4a4a-b892-3a32f774b721 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.956006092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0e81d24-317d-4a4a-b892-3a32f774b721 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.957320693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e64d7bff-f191-45da-bb9e-36e0fb1f7e67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.957827455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723055761957797732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e64d7bff-f191-45da-bb9e-36e0fb1f7e67 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.958553632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f876fea-c4f6-4904-be4a-8b2351410703 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.958608025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f876fea-c4f6-4904-be4a-8b2351410703 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:36:01 ha-198246 crio[680]: time="2024-08-07 18:36:01.958855996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723055519101351291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319342376725,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319067011712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,PodSandboxId:885cc92388628d238f8733c8a4e19dbe966de1d74cae5f0b0260d47f543204eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1723055318987833300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723055306768350208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723055302
363392306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,PodSandboxId:c3113eff4cbeab6d11557ebe28457c4fed8b799968cd7a8112552a9f26c0c7a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172305528372
0347825,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,PodSandboxId:1fcd84f97f1d17549fda334f2d795061561cad20b325aed47c328b7537d9e461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723055280599506170,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723055280563764082,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723055280588797776,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,PodSandboxId:30588dee2a435159b1676038c3a1e71d8e794c98f645bd6032392139ac087781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723055280520038813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f876fea-c4f6-4904-be4a-8b2351410703 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80335e9819afd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4d0990efdcee8       busybox-fc5497c4f-chh26
	806c3ba54cd9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   a5394b2f1434b       coredns-7db6d8ff4d-w6w6g
	3f9784c457acb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   b57adade6ea15       coredns-7db6d8ff4d-rbnrx
	93fcff9b17b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   885cc92388628       storage-provisioner
	5433090bdddca       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    7 minutes ago       Running             kindnet-cni               0                   bd5d340b4a584       kindnet-sgl8v
	c6c6220e1a7fb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   4aec116af531d       kube-proxy-4l79v
	305290711d544       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   c3113eff4cbea       kube-vip-ha-198246
	4902df4367b62       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   1fcd84f97f1d1       kube-apiserver-ha-198246
	2ff4075c05c48       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   7c56ff7ba09a0       kube-scheduler-ha-198246
	981dfd0662596       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   0e8285057cc05       etcd-ha-198246
	6c84edcc5a98f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   30588dee2a435       kube-controller-manager-ha-198246
	
	
	==> coredns [3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7] <==
	[INFO] 10.244.1.2:60491 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000238403s
	[INFO] 10.244.1.2:56734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110021s
	[INFO] 10.244.0.4:60444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100154s
	[INFO] 10.244.0.4:54868 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007045s
	[INFO] 10.244.0.4:55542 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001278843s
	[INFO] 10.244.0.4:41062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090569s
	[INFO] 10.244.0.4:45221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159605s
	[INFO] 10.244.0.4:52919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008416s
	[INFO] 10.244.2.2:57336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947478s
	[INFO] 10.244.2.2:58778 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148421s
	[INFO] 10.244.2.2:40534 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094901s
	[INFO] 10.244.2.2:34562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435891s
	[INFO] 10.244.2.2:40255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066647s
	[INFO] 10.244.2.2:33303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074642s
	[INFO] 10.244.2.2:54865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065816s
	[INFO] 10.244.1.2:56362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135028s
	[INFO] 10.244.1.2:50486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103508s
	[INFO] 10.244.0.4:60915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079398s
	[INFO] 10.244.2.2:36331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189607s
	[INFO] 10.244.1.2:44020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226665s
	[INFO] 10.244.1.2:47459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129465s
	[INFO] 10.244.0.4:59992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000059798s
	[INFO] 10.244.0.4:55811 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139124s
	[INFO] 10.244.2.2:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132316s
	[INFO] 10.244.2.2:34338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147334s
	
	
	==> coredns [806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab] <==
	[INFO] 10.244.0.4:54342 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000106253s
	[INFO] 10.244.2.2:37220 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009521s
	[INFO] 10.244.2.2:40447 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001945707s
	[INFO] 10.244.2.2:46546 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.003736918s
	[INFO] 10.244.1.2:40239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121833s
	[INFO] 10.244.1.2:39185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274854s
	[INFO] 10.244.1.2:32995 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301562s
	[INFO] 10.244.1.2:57764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00324711s
	[INFO] 10.244.0.4:43175 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001969935s
	[INFO] 10.244.0.4:47947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090373s
	[INFO] 10.244.2.2:59435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185691s
	[INFO] 10.244.1.2:41342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215074s
	[INFO] 10.244.1.2:58323 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133762s
	[INFO] 10.244.0.4:48395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131554s
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121525s
	[INFO] 10.244.0.4:53506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084053s
	[INFO] 10.244.2.2:47826 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205944s
	[INFO] 10.244.2.2:43418 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113361s
	[INFO] 10.244.2.2:53197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103281s
	[INFO] 10.244.1.2:51874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001263s
	[INFO] 10.244.1.2:40094 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205313s
	[INFO] 10.244.0.4:55591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001033s
	[INFO] 10.244.0.4:41281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083191s
	[INFO] 10.244.2.2:52214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093799s
	[INFO] 10.244.2.2:55578 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146065s
	
	
	==> describe nodes <==
	Name:               ha-198246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:28:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-198246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e31604902e0745d1a1407795d2ccbfc0
	  System UUID:                e3160490-2e07-45d1-a140-7795d2ccbfc0
	  Boot ID:                    9b0f1850-84af-432c-85c0-f24cda670347
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-chh26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 coredns-7db6d8ff4d-rbnrx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m42s
	  kube-system                 coredns-7db6d8ff4d-w6w6g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m42s
	  kube-system                 etcd-ha-198246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m58s
	  kube-system                 kindnet-sgl8v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m42s
	  kube-system                 kube-apiserver-ha-198246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-controller-manager-ha-198246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-4l79v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-scheduler-ha-198246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-vip-ha-198246                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m39s  kube-proxy       
	  Normal  Starting                 7m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m56s  kubelet          Node ha-198246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m56s  kubelet          Node ha-198246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s  kubelet          Node ha-198246 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m42s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal  NodeReady                7m24s  kubelet          Node ha-198246 status is now: NodeReady
	  Normal  RegisteredNode           5m37s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal  RegisteredNode           4m18s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	
	
	Name:               ha-198246-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:30:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:33:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-198246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8eadf45fa3a45c1ace8b37287f97c9d
	  System UUID:                b8eadf45-fa3a-45c1-ace8-b37287f97c9d
	  Boot ID:                    7900c294-c092-44a8-b18b-e0879a5b10ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g62d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 etcd-ha-198246-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m53s
	  kube-system                 kindnet-8x6fj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m55s
	  kube-system                 kube-apiserver-ha-198246-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-ha-198246-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-proxy-m5ng2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-scheduler-ha-198246-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-vip-ha-198246-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m52s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-198246-m02 status is now: NodeNotReady
	
	
	Name:               ha-198246-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_31_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:31:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:35:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-198246-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60409ac81f5346078f5f2d7599678540
	  System UUID:                60409ac8-1f53-4607-8f5f-2d7599678540
	  Boot ID:                    30ed0e62-43cd-4d25-85c3-6ffd341eb52a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k2t25                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 etcd-ha-198246-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m34s
	  kube-system                 kindnet-7854s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m36s
	  kube-system                 kube-apiserver-ha-198246-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-ha-198246-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-7mttr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-ha-198246-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-vip-ha-198246-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m30s                  kube-proxy       
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m35s (x8 over 4m36s)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x8 over 4m36s)  kubelet          Node ha-198246-m03 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	
	
	Name:               ha-198246-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:33:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-198246-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e050b6016e8b45679acbdd2b5c7bde62
	  System UUID:                e050b601-6e8b-4567-9acb-dd2b5c7bde62
	  Boot ID:                    3b3e9caf-949c-417a-90da-edc98697cdac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5vj44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m30s
	  kube-system                 kube-proxy-5ggpl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m30s (x2 over 3m30s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x2 over 3m30s)  kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x2 over 3m30s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  RegisteredNode           3m27s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-198246-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 7 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050670] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.791892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.561405] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603000] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.529902] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.057949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071605] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183672] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.110780] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.300871] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.248154] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.501138] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062750] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 7 18:28] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.095778] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.277376] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.193932] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5] <==
	{"level":"warn","ts":"2024-08-07T18:36:02.300601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.312879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.323028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.329567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.333644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.340386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.344385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.351961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.360025Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.36382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.366965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.377064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.384615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.391428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.396189Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.399383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.406396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.420327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.424785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.440706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.443085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.455405Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.460725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.488994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:36:02.540651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:36:02 up 8 min,  0 users,  load average: 0.33, 0.29, 0.17
	Linux ha-198246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554] <==
	I0807 18:35:28.091126       1 main.go:299] handling current node
	I0807 18:35:38.097627       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:35:38.098595       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:35:38.098913       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:35:38.099278       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:35:38.099617       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:35:38.099683       1 main.go:299] handling current node
	I0807 18:35:38.099730       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:35:38.099757       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:35:48.100006       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:35:48.100068       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:35:48.100219       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:35:48.100245       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:35:48.100312       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:35:48.100318       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:35:48.100371       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:35:48.100377       1 main.go:299] handling current node
	I0807 18:35:58.090836       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:35:58.090977       1 main.go:299] handling current node
	I0807 18:35:58.091007       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:35:58.091026       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:35:58.091175       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:35:58.091196       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:35:58.091280       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:35:58.091299       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473] <==
	I0807 18:28:05.757651       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0807 18:28:05.765720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196]
	I0807 18:28:05.766882       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 18:28:05.772395       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 18:28:05.830060       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 18:28:06.776266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 18:28:06.809546       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0807 18:28:06.821673       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 18:28:20.248559       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0807 18:28:20.348011       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0807 18:32:00.535866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57744: use of closed network connection
	E0807 18:32:00.744066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57766: use of closed network connection
	E0807 18:32:00.952672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57792: use of closed network connection
	E0807 18:32:01.172355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57812: use of closed network connection
	E0807 18:32:01.352150       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57828: use of closed network connection
	E0807 18:32:01.532194       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57846: use of closed network connection
	E0807 18:32:01.714325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57860: use of closed network connection
	E0807 18:32:01.900647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57872: use of closed network connection
	E0807 18:32:02.087553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57896: use of closed network connection
	E0807 18:32:02.383817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57920: use of closed network connection
	E0807 18:32:02.568053       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57936: use of closed network connection
	E0807 18:32:02.768857       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57942: use of closed network connection
	E0807 18:32:02.971250       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57960: use of closed network connection
	E0807 18:32:03.156171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E0807 18:32:03.335581       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	
	
	==> kube-controller-manager [6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36] <==
	I0807 18:31:30.326416       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246-m03"
	I0807 18:31:55.863780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.509454ms"
	I0807 18:31:55.909903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.038129ms"
	I0807 18:31:56.006853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.823451ms"
	I0807 18:31:56.148782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.712519ms"
	I0807 18:31:56.149891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="322.281µs"
	I0807 18:31:56.191596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.550452ms"
	I0807 18:31:56.191748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.564µs"
	I0807 18:31:56.760379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.592µs"
	I0807 18:31:56.902720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.37µs"
	I0807 18:31:57.234083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.456µs"
	I0807 18:31:59.698073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.032139ms"
	I0807 18:31:59.698278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.373µs"
	I0807 18:31:59.804042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.215774ms"
	I0807 18:31:59.804158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.476µs"
	I0807 18:32:00.080762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.328982ms"
	I0807 18:32:00.082206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.736µs"
	E0807 18:32:32.063101       1 certificate_controller.go:146] Sync csr-btqqk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-btqqk": the object has been modified; please apply your changes to the latest version and try again
	I0807 18:32:32.310340       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198246-m04\" does not exist"
	I0807 18:32:32.379172       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198246-m04" podCIDRs=["10.244.3.0/24"]
	I0807 18:32:35.352861       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246-m04"
	I0807 18:33:21.056413       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	I0807 18:34:14.817275       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	I0807 18:34:14.871985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.917944ms"
	I0807 18:34:14.873327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.625µs"
	
	
	==> kube-proxy [c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8] <==
	I0807 18:28:22.580618       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:28:22.601637       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0807 18:28:22.654297       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:28:22.654381       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:28:22.654403       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:28:22.658197       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:28:22.658748       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:28:22.658783       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:28:22.661148       1 config.go:192] "Starting service config controller"
	I0807 18:28:22.661385       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:28:22.661502       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:28:22.661508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:28:22.662750       1 config.go:319] "Starting node config controller"
	I0807 18:28:22.662780       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:28:22.761662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:28:22.761768       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:28:22.763105       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56] <==
	E0807 18:28:05.163012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:28:05.164577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:28:05.164616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:28:05.283884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:28:05.283932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:28:05.320413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:28:05.320504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:28:05.373610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:28:05.373694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0807 18:28:06.678552       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 18:32:32.502898       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-z8cdn\": pod kindnet-z8cdn is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-z8cdn" node="ha-198246-m04"
	E0807 18:32:32.503513       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc6ed049-d9fb-4132-b192-8015240cb919(kube-system/kindnet-z8cdn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-z8cdn"
	E0807 18:32:32.503593       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-z8cdn\": pod kindnet-z8cdn is already assigned to node \"ha-198246-m04\"" pod="kube-system/kindnet-z8cdn"
	I0807 18:32:32.503644       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-z8cdn" node="ha-198246-m04"
	E0807 18:32:32.551938       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cv65q\": pod kube-proxy-cv65q is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cv65q" node="ha-198246-m04"
	E0807 18:32:32.553290       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cv65q\": pod kube-proxy-cv65q is already assigned to node \"ha-198246-m04\"" pod="kube-system/kube-proxy-cv65q"
	E0807 18:32:32.556989       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vndzm\": pod kindnet-vndzm is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vndzm" node="ha-198246-m04"
	E0807 18:32:32.557081       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vndzm\": pod kindnet-vndzm is already assigned to node \"ha-198246-m04\"" pod="kube-system/kindnet-vndzm"
	E0807 18:32:36.244172       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ggpl\": pod kube-proxy-5ggpl is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ggpl" node="ha-198246-m04"
	E0807 18:32:36.244315       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2ed71e43-edd6-4262-a1ed-a3232e717574(kube-system/kube-proxy-5ggpl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ggpl"
	E0807 18:32:36.244399       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ggpl\": pod kube-proxy-5ggpl is already assigned to node \"ha-198246-m04\"" pod="kube-system/kube-proxy-5ggpl"
	I0807 18:32:36.245064       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ggpl" node="ha-198246-m04"
	E0807 18:32:36.281841       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdszb\": pod kube-proxy-tdszb is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdszb" node="ha-198246-m04"
	E0807 18:32:36.281939       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdszb\": pod kube-proxy-tdszb is being deleted, cannot be assigned to a host" pod="kube-system/kube-proxy-tdszb"
	E0807 18:32:36.330630       1 schedule_one.go:1095] "Error updating pod" err="pods \"kube-proxy-tdszb\" not found" pod="kube-system/kube-proxy-tdszb"
	
	
	==> kubelet <==
	Aug 07 18:31:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:31:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:31:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:31:55 ha-198246 kubelet[1372]: I0807 18:31:55.870940    1372 topology_manager.go:215] "Topology Admit Handler" podUID="42848aea-5e18-4f5c-b59d-f615d5128a74" podNamespace="default" podName="busybox-fc5497c4f-chh26"
	Aug 07 18:31:55 ha-198246 kubelet[1372]: I0807 18:31:55.995681    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdsts\" (UniqueName: \"kubernetes.io/projected/42848aea-5e18-4f5c-b59d-f615d5128a74-kube-api-access-mdsts\") pod \"busybox-fc5497c4f-chh26\" (UID: \"42848aea-5e18-4f5c-b59d-f615d5128a74\") " pod="default/busybox-fc5497c4f-chh26"
	Aug 07 18:32:06 ha-198246 kubelet[1372]: E0807 18:32:06.768650    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:32:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:32:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:32:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:32:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:33:06 ha-198246 kubelet[1372]: E0807 18:33:06.768553    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:33:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:34:06 ha-198246 kubelet[1372]: E0807 18:34:06.758391    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:34:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:35:06 ha-198246 kubelet[1372]: E0807 18:35:06.757102    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:35:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198246 -n ha-198246
helpers_test.go:261: (dbg) Run:  kubectl --context ha-198246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.09s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (61.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (3.192257422s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:07.145309   49432 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:07.145590   49432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:07.145601   49432 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:07.145607   49432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:07.145836   49432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:07.146028   49432 out.go:298] Setting JSON to false
	I0807 18:36:07.146057   49432 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:07.146161   49432 notify.go:220] Checking for updates...
	I0807 18:36:07.146470   49432 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:07.146486   49432 status.go:255] checking status of ha-198246 ...
	I0807 18:36:07.146902   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.146964   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.165088   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38645
	I0807 18:36:07.165597   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.166153   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.166172   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.166559   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.166753   49432 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:07.168475   49432 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:07.168489   49432 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:07.168810   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.168867   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.184293   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
	I0807 18:36:07.184792   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.185322   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.185345   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.185637   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.185915   49432 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:07.189611   49432 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:07.190150   49432 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:07.190184   49432 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:07.190326   49432 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:07.190666   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.190707   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.206275   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35897
	I0807 18:36:07.206729   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.207301   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.207344   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.207666   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.207845   49432 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:07.208158   49432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:07.208185   49432 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:07.212082   49432 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:07.212627   49432 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:07.212655   49432 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:07.212971   49432 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:07.213185   49432 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:07.213458   49432 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:07.213615   49432 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:07.304775   49432 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:07.312278   49432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:07.327385   49432 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:07.327411   49432 api_server.go:166] Checking apiserver status ...
	I0807 18:36:07.327439   49432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:07.342324   49432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:07.353473   49432 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:07.353534   49432 ssh_runner.go:195] Run: ls
	I0807 18:36:07.359369   49432 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:07.363934   49432 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:07.363959   49432 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:07.363971   49432 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:07.363992   49432 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:07.364342   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.364385   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.379363   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0807 18:36:07.379787   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.380319   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.380342   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.380603   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.380878   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:07.382612   49432 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:07.382641   49432 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:07.382919   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.382981   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.397768   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0807 18:36:07.398286   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.398867   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.398886   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.399256   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.399570   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:07.402388   49432 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:07.402873   49432 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:07.402911   49432 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:07.403070   49432 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:07.403370   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:07.403423   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:07.419635   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
	I0807 18:36:07.420125   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:07.420672   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:07.420698   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:07.421029   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:07.421224   49432 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:07.421397   49432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:07.421416   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:07.424289   49432 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:07.424770   49432 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:07.424793   49432 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:07.425018   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:07.425233   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:07.425424   49432 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:07.425568   49432 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:09.924578   49432 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:09.924664   49432 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:09.924677   49432 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:09.924692   49432 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:09.924709   49432 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:09.924716   49432 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:09.925032   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:09.925083   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:09.940029   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0807 18:36:09.940459   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:09.940937   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:09.940957   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:09.941346   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:09.941598   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:09.943278   49432 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:09.943301   49432 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:09.943704   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:09.943748   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:09.959498   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37337
	I0807 18:36:09.959888   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:09.960375   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:09.960394   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:09.960694   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:09.960929   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:09.964061   49432 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:09.964537   49432 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:09.964568   49432 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:09.964749   49432 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:09.965057   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:09.965091   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:09.980393   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0807 18:36:09.980811   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:09.981299   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:09.981324   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:09.981644   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:09.981843   49432 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:09.982025   49432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:09.982042   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:09.984764   49432 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:09.985202   49432 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:09.985231   49432 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:09.985371   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:09.985522   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:09.985686   49432 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:09.985817   49432 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:10.076869   49432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:10.093118   49432 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:10.093148   49432 api_server.go:166] Checking apiserver status ...
	I0807 18:36:10.093209   49432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:10.107695   49432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:10.117766   49432 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:10.117829   49432 ssh_runner.go:195] Run: ls
	I0807 18:36:10.127505   49432 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:10.131546   49432 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:10.131583   49432 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:10.131591   49432 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:10.131606   49432 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:10.131899   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.131933   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.146922   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0807 18:36:10.147437   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.147877   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.147902   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.148235   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.148449   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:10.150103   49432 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:10.150131   49432 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:10.150541   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.150593   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.165969   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33791
	I0807 18:36:10.166432   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.166961   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.166982   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.167327   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.167532   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:10.170673   49432 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:10.171110   49432 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:10.171129   49432 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:10.171252   49432 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:10.171538   49432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.171606   49432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.187725   49432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37251
	I0807 18:36:10.188165   49432 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.188684   49432 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.188707   49432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.189051   49432 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.189258   49432 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:10.189453   49432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:10.189473   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:10.192836   49432 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:10.193335   49432 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:10.193351   49432 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:10.193511   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:10.193696   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:10.193863   49432 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:10.194000   49432 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:10.279879   49432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:10.295745   49432 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
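
The stderr trace above repeats the same per-node probe for every VM in the ha-198246 cluster: dial SSH, read /var usage with df, check whether the kubelet unit is active, and (for control-plane nodes) hit the apiserver /healthz endpoint through the VIP at https://192.168.39.254:8443. Below is a minimal, hypothetical Go sketch of that probe sequence, not minikube's actual status.go implementation; the runSSH helper and the nodeStatus shape are illustrative stand-ins.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	type nodeStatus struct {
		Host, Kubelet, APIServer string
	}

	// probeNode mirrors the sequence visible in the trace above: run df on /var,
	// check the kubelet unit, then hit the apiserver healthz endpoint through the
	// cluster VIP. runSSH is a hypothetical stand-in for an SSH command runner.
	func probeNode(runSSH func(cmd string) (string, error), healthzURL string) nodeStatus {
		st := nodeStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}

		// Storage check, as in: sh -c "df -h /var | awk 'NR==2{print $5}'"
		if _, err := runSSH(`df -h /var | awk 'NR==2{print $5}'`); err != nil {
			st.Host = "Error"
			st.Kubelet, st.APIServer = "Nonexistent", "Nonexistent"
			return st
		}

		// Kubelet check, as in: sudo systemctl is-active --quiet service kubelet
		if _, err := runSSH("sudo systemctl is-active --quiet service kubelet"); err == nil {
			st.Kubelet = "Running"
		}

		// Apiserver probe, as in: Checking apiserver healthz at https://192.168.39.254:8443/healthz
		tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
		client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
		if resp, err := client.Get(healthzURL); err == nil {
			if resp.StatusCode == http.StatusOK {
				st.APIServer = "Running"
			}
			resp.Body.Close()
		}
		return st
	}

	func main() {
		ok := func(cmd string) (string, error) { return "", nil } // pretend SSH succeeded
		fmt.Printf("%+v\n", probeNode(ok, "https://192.168.39.254:8443/healthz"))
	}
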
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (2.549644818s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:10.867392   49517 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:10.867523   49517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:10.867534   49517 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:10.867540   49517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:10.867745   49517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:10.867922   49517 out.go:298] Setting JSON to false
	I0807 18:36:10.867951   49517 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:10.867999   49517 notify.go:220] Checking for updates...
	I0807 18:36:10.868389   49517 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:10.868410   49517 status.go:255] checking status of ha-198246 ...
	I0807 18:36:10.868873   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.868934   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.887497   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0807 18:36:10.887958   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.888593   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.888623   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.888981   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.889194   49517 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:10.890768   49517 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:10.890781   49517 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:10.891064   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.891096   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.906634   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42657
	I0807 18:36:10.907028   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.907508   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.907526   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.907813   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.907995   49517 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:10.910648   49517 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:10.911095   49517 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:10.911121   49517 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:10.911307   49517 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:10.911627   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:10.911672   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:10.927150   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
	I0807 18:36:10.927581   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:10.928083   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:10.928100   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:10.928507   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:10.928724   49517 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:10.928972   49517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:10.929003   49517 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:10.931987   49517 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:10.932427   49517 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:10.932453   49517 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:10.932571   49517 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:10.932752   49517 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:10.933025   49517 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:10.933176   49517 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:11.016984   49517 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:11.023550   49517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:11.038968   49517 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:11.039004   49517 api_server.go:166] Checking apiserver status ...
	I0807 18:36:11.039061   49517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:11.053653   49517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:11.064978   49517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:11.065045   49517 ssh_runner.go:195] Run: ls
	I0807 18:36:11.070768   49517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:11.076906   49517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:11.076933   49517 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:11.076947   49517 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:11.076972   49517 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:11.077295   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:11.077337   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:11.093511   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0807 18:36:11.093966   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:11.094505   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:11.094531   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:11.094822   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:11.095018   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:11.096724   49517 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:11.096741   49517 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:11.097068   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:11.097104   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:11.111801   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
	I0807 18:36:11.112224   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:11.112618   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:11.112638   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:11.112977   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:11.113161   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:11.116037   49517 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:11.116505   49517 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:11.116530   49517 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:11.116686   49517 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:11.117014   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:11.117059   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:11.135005   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33267
	I0807 18:36:11.135446   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:11.135973   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:11.136001   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:11.136364   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:11.136550   49517 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:11.136732   49517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:11.136753   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:11.139748   49517 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:11.140191   49517 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:11.140236   49517 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:11.140491   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:11.140648   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:11.140809   49517 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:11.140921   49517 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:12.996470   49517 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:12.996556   49517 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:12.996574   49517 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:12.996583   49517 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:12.996599   49517 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:12.996606   49517 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:12.996940   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:12.996994   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.012370   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0807 18:36:13.012915   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.013434   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.013460   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.013772   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.014000   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:13.015460   49517 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:13.015478   49517 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:13.016058   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:13.016111   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.031048   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I0807 18:36:13.031456   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.031904   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.031923   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.032282   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.032451   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:13.035531   49517 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:13.036027   49517 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:13.036057   49517 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:13.036164   49517 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:13.036496   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:13.036537   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.051817   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0807 18:36:13.052260   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.052732   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.052755   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.053066   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.053349   49517 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:13.053597   49517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:13.053620   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:13.056647   49517 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:13.057119   49517 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:13.057145   49517 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:13.057357   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:13.057501   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:13.057736   49517 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:13.057920   49517 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:13.148331   49517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:13.164709   49517 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:13.164737   49517 api_server.go:166] Checking apiserver status ...
	I0807 18:36:13.164768   49517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:13.187428   49517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:13.200495   49517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:13.200556   49517 ssh_runner.go:195] Run: ls
	I0807 18:36:13.205784   49517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:13.210192   49517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:13.210216   49517 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:13.210224   49517 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:13.210239   49517 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:13.210528   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:13.210561   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.226960   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0807 18:36:13.227375   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.227801   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.227822   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.228194   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.228420   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:13.229860   49517 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:13.229874   49517 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:13.230216   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:13.230251   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.245403   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34039
	I0807 18:36:13.245916   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.246448   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.246471   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.246798   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.247008   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:13.250187   49517 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:13.250559   49517 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:13.250599   49517 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:13.250846   49517 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:13.251163   49517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:13.251198   49517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:13.267449   49517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0807 18:36:13.267873   49517 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:13.268390   49517 main.go:141] libmachine: Using API Version  1
	I0807 18:36:13.268419   49517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:13.268774   49517 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:13.268986   49517 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:13.269190   49517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:13.269211   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:13.272320   49517 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:13.272867   49517 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:13.272903   49517 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:13.273070   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:13.273251   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:13.273427   49517 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:13.273623   49517 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:13.359981   49517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:13.374757   49517 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
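
In the run above, ha-198246-m02 is reported as host: Error, kubelet: Nonexistent, apiserver: Nonexistent solely because the SSH dial to 192.168.39.251:22 fails with "no route to host" before any command can run. A rough sketch of that reachability classification, assuming a plain TCP dial with a timeout (addresses and timeout are illustrative, not minikube's real values):

	// Illustrative only: classify a node as unreachable the way the log does for
	// ha-198246-m02, by attempting a TCP dial to its SSH port with a timeout.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err // e.g. "connect: no route to host"
		}
		return conn.Close()
	}

	func main() {
		for _, addr := range []string{"192.168.39.196:22", "192.168.39.251:22"} {
			if err := sshReachable(addr, 3*time.Second); err != nil {
				fmt.Printf("%s: Host=Error Kubelet=Nonexistent APIServer=Nonexistent (%v)\n", addr, err)
				continue
			}
			fmt.Printf("%s: reachable, continue with kubelet/apiserver probes\n", addr)
		}
	}
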
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (4.739977448s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:14.966384   49618 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:14.966482   49618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:14.966488   49618 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:14.966494   49618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:14.966730   49618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:14.966940   49618 out.go:298] Setting JSON to false
	I0807 18:36:14.966965   49618 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:14.967075   49618 notify.go:220] Checking for updates...
	I0807 18:36:14.967468   49618 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:14.967490   49618 status.go:255] checking status of ha-198246 ...
	I0807 18:36:14.967971   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:14.968035   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:14.985499   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0807 18:36:14.985956   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:14.986534   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:14.986560   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:14.986956   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:14.987205   49618 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:14.988815   49618 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:14.988832   49618 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:14.989163   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:14.989206   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:15.004096   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0807 18:36:15.004548   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:15.005100   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:15.005122   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:15.005433   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:15.005627   49618 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:15.008434   49618 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:15.008792   49618 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:15.008841   49618 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:15.008933   49618 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:15.009299   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:15.009335   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:15.024265   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38627
	I0807 18:36:15.024731   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:15.025164   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:15.025183   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:15.025449   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:15.025675   49618 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:15.025870   49618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:15.025898   49618 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:15.028586   49618 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:15.029036   49618 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:15.029056   49618 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:15.029199   49618 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:15.029366   49618 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:15.029545   49618 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:15.029677   49618 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:15.113105   49618 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:15.120436   49618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:15.138463   49618 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:15.138490   49618 api_server.go:166] Checking apiserver status ...
	I0807 18:36:15.138522   49618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:15.153418   49618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:15.164330   49618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:15.164383   49618 ssh_runner.go:195] Run: ls
	I0807 18:36:15.169482   49618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:15.175715   49618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:15.175744   49618 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:15.175755   49618 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:15.175770   49618 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:15.176164   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:15.176226   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:15.191675   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0807 18:36:15.192261   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:15.192800   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:15.192819   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:15.193151   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:15.193363   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:15.195314   49618 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:15.195328   49618 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:15.195652   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:15.195690   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:15.210482   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40321
	I0807 18:36:15.210902   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:15.211416   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:15.211439   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:15.211832   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:15.212029   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:15.215279   49618 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:15.215712   49618 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:15.215740   49618 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:15.215885   49618 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:15.216253   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:15.216294   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:15.231736   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0807 18:36:15.232159   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:15.232663   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:15.232686   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:15.233078   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:15.233296   49618 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:15.233498   49618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:15.233518   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:15.236256   49618 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:15.236694   49618 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:15.236717   49618 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:15.236839   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:15.236996   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:15.237136   49618 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:15.237267   49618 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:16.068434   49618 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:16.068475   49618 retry.go:31] will retry after 155.188845ms: dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:19.300483   49618 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:19.300573   49618 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:19.300598   49618 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:19.300608   49618 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:19.300641   49618 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:19.300655   49618 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:19.301096   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.301155   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.316531   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0807 18:36:19.317026   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.317468   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.317489   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.317765   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.317994   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:19.319820   49618 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:19.319834   49618 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:19.320116   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.320156   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.334795   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32823
	I0807 18:36:19.335278   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.335843   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.335868   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.336244   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.336453   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:19.339670   49618 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:19.340159   49618 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:19.340182   49618 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:19.340415   49618 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:19.340756   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.340793   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.358174   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0807 18:36:19.358677   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.359222   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.359247   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.359634   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.359840   49618 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:19.360043   49618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:19.360065   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:19.363068   49618 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:19.363454   49618 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:19.363489   49618 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:19.363755   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:19.363955   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:19.364137   49618 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:19.364301   49618 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:19.451878   49618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:19.466547   49618 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:19.466576   49618 api_server.go:166] Checking apiserver status ...
	I0807 18:36:19.466610   49618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:19.480792   49618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:19.491023   49618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:19.491082   49618 ssh_runner.go:195] Run: ls
	I0807 18:36:19.496012   49618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:19.500600   49618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:19.500622   49618 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:19.500630   49618 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:19.500644   49618 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:19.500932   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.500969   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.516672   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0807 18:36:19.517152   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.517645   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.517660   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.517996   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.518221   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:19.519839   49618 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:19.519853   49618 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:19.520138   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.520191   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.534832   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41455
	I0807 18:36:19.535283   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.535722   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.535780   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.536085   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.536311   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:19.538901   49618 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:19.539396   49618 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:19.539443   49618 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:19.539621   49618 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:19.539908   49618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:19.539943   49618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:19.555915   49618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0807 18:36:19.556507   49618 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:19.557170   49618 main.go:141] libmachine: Using API Version  1
	I0807 18:36:19.557197   49618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:19.557550   49618 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:19.557734   49618 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:19.557912   49618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:19.557930   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:19.560993   49618 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:19.561471   49618 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:19.561502   49618 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:19.561734   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:19.561907   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:19.562065   49618 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:19.562227   49618 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:19.648653   49618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:19.663001   49618 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
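
This run hits the same failure, but the client retries the dial ("will retry after 155.188845ms") before giving up on ha-198246-m02. A hedged sketch of such a bounded retry loop follows; the attempt count and backoff are assumptions for illustration only, not the actual sshutil retry policy.

	// Sketch of a bounded dial-with-retry, loosely mirroring the
	// "will retry after ..." lines in the trace above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialWithRetry(addr string, attempts int, baseDelay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			var conn net.Conn
			if conn, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				return conn.Close()
			}
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(baseDelay * time.Duration(i+1)) // linear backoff, for the sketch only
		}
		return err
	}

	func main() {
		if err := dialWithRetry("192.168.39.251:22", 3, 150*time.Millisecond); err != nil {
			fmt.Println("giving up:", err)
		}
	}
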
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (3.74339873s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:22.149093   49733 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:22.149221   49733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:22.149232   49733 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:22.149239   49733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:22.149422   49733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:22.149612   49733 out.go:298] Setting JSON to false
	I0807 18:36:22.149643   49733 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:22.149742   49733 notify.go:220] Checking for updates...
	I0807 18:36:22.150154   49733 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:22.150177   49733 status.go:255] checking status of ha-198246 ...
	I0807 18:36:22.150629   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.150694   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.169949   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I0807 18:36:22.170429   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.171010   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.171030   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.171416   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.171605   49733 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:22.173475   49733 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:22.173492   49733 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:22.173784   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.173829   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.190002   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36955
	I0807 18:36:22.190430   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.190977   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.191000   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.191307   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.191537   49733 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:22.194507   49733 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:22.194883   49733 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:22.194909   49733 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:22.195122   49733 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:22.195424   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.195459   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.210189   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0807 18:36:22.210612   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.211132   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.211160   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.211475   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.211668   49733 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:22.211862   49733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:22.211888   49733 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:22.214592   49733 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:22.214927   49733 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:22.214954   49733 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:22.215085   49733 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:22.215290   49733 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:22.215436   49733 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:22.215567   49733 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:22.296960   49733 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:22.303791   49733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:22.320124   49733 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:22.320153   49733 api_server.go:166] Checking apiserver status ...
	I0807 18:36:22.320186   49733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:22.335347   49733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:22.345914   49733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:22.345967   49733 ssh_runner.go:195] Run: ls
	I0807 18:36:22.351674   49733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:22.356541   49733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:22.356566   49733 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:22.356576   49733 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:22.356604   49733 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:22.356980   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.357017   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.373553   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0807 18:36:22.373993   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.374538   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.374561   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.374948   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.375181   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:22.376821   49733 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:22.376836   49733 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:22.377124   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.377167   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.392953   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46305
	I0807 18:36:22.393384   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.393834   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.393855   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.394228   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.394413   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:22.397195   49733 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:22.397612   49733 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:22.397657   49733 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:22.397816   49733 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:22.398154   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:22.398203   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:22.412721   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36439
	I0807 18:36:22.413191   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:22.413628   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:22.413644   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:22.413941   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:22.414095   49733 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:22.414273   49733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:22.414307   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:22.416763   49733 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:22.417214   49733 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:22.417238   49733 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:22.417343   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:22.417502   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:22.417621   49733 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:22.417747   49733 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:25.476526   49733 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:25.476610   49733 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:25.476623   49733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:25.476630   49733 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:25.476653   49733 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:25.476660   49733 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:25.476970   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.477013   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.492726   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45123
	I0807 18:36:25.493217   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.493704   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.493729   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.494099   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.494294   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:25.496069   49733 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:25.496087   49733 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:25.496467   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.496505   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.510934   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0807 18:36:25.511310   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.511790   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.511811   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.512284   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.512551   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:25.515528   49733 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:25.515921   49733 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:25.515955   49733 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:25.516068   49733 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:25.516410   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.516444   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.531690   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0807 18:36:25.532253   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.532852   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.532879   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.533244   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.533456   49733 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:25.533645   49733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:25.533671   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:25.536839   49733 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:25.537438   49733 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:25.537462   49733 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:25.537663   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:25.537873   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:25.538091   49733 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:25.538238   49733 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:25.628902   49733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:25.647324   49733 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:25.647348   49733 api_server.go:166] Checking apiserver status ...
	I0807 18:36:25.647379   49733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:25.663953   49733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:25.677634   49733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:25.677686   49733 ssh_runner.go:195] Run: ls
	I0807 18:36:25.683026   49733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:25.687247   49733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:25.687270   49733 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:25.687277   49733 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:25.687291   49733 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:25.687654   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.687689   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.702475   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33277
	I0807 18:36:25.702936   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.703407   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.703427   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.703733   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.703950   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:25.705788   49733 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:25.705806   49733 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:25.706105   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.706165   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.722462   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0807 18:36:25.722895   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.723425   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.723453   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.723779   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.723985   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:25.727108   49733 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:25.727526   49733 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:25.727564   49733 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:25.727730   49733 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:25.728081   49733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:25.728140   49733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:25.742853   49733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0807 18:36:25.743357   49733 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:25.743808   49733 main.go:141] libmachine: Using API Version  1
	I0807 18:36:25.743829   49733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:25.744229   49733 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:25.744409   49733 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:25.744615   49733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:25.744632   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:25.747757   49733 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:25.748307   49733 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:25.748335   49733 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:25.748493   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:25.748671   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:25.748801   49733 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:25.748983   49733 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:25.836067   49733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:25.851891   49733 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
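
For the control-plane nodes, the decisive step in the probe above is the apiserver health check against the HA virtual IP, which the log shows returning 200/ok at https://192.168.39.254:8443/healthz. A minimal, illustrative Go sketch of that request (the InsecureSkipVerify shortcut is an assumption made for brevity here, not how minikube authenticates):

// healthz_sketch.go - illustrative only; probes the same endpoint the
// status command checks in the stderr block above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut: skip certificate verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // the log above records "200: ok"
}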
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
E0807 18:36:31.077280   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (4.685430283s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:27.616099   49833 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:27.616227   49833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:27.616236   49833 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:27.616241   49833 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:27.616415   49833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:27.616616   49833 out.go:298] Setting JSON to false
	I0807 18:36:27.616641   49833 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:27.616733   49833 notify.go:220] Checking for updates...
	I0807 18:36:27.617089   49833 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:27.617103   49833 status.go:255] checking status of ha-198246 ...
	I0807 18:36:27.617513   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.617552   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.637632   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0807 18:36:27.638037   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.638672   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.638700   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.639002   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.639228   49833 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:27.640822   49833 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:27.640851   49833 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:27.641125   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.641164   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.655649   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0807 18:36:27.656039   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.656505   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.656536   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.656848   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.657036   49833 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:27.659597   49833 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:27.659980   49833 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:27.659998   49833 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:27.660194   49833 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:27.660538   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.660574   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.675388   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0807 18:36:27.675774   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.676266   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.676288   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.676589   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.676768   49833 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:27.676975   49833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:27.676997   49833 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:27.679917   49833 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:27.680386   49833 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:27.680421   49833 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:27.680588   49833 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:27.680742   49833 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:27.680877   49833 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:27.681037   49833 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:27.764382   49833 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:27.771839   49833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:27.787360   49833 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:27.787391   49833 api_server.go:166] Checking apiserver status ...
	I0807 18:36:27.787425   49833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:27.802155   49833 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:27.813195   49833 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:27.813271   49833 ssh_runner.go:195] Run: ls
	I0807 18:36:27.821004   49833 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:27.825320   49833 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:27.825348   49833 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:27.825356   49833 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:27.825375   49833 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:27.825655   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.825714   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.841227   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0807 18:36:27.841687   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.842255   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.842275   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.842604   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.842813   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:27.844336   49833 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:27.844354   49833 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:27.844668   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.844714   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.859648   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0807 18:36:27.860141   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.860648   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.860672   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.860988   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.861196   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:27.864071   49833 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:27.864556   49833 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:27.864581   49833 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:27.864748   49833 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:27.865048   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:27.865100   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:27.879637   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0807 18:36:27.880055   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:27.880586   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:27.880607   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:27.880898   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:27.881075   49833 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:27.881274   49833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:27.881296   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:27.884163   49833 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:27.884640   49833 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:27.884668   49833 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:27.884752   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:27.884932   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:27.885089   49833 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:27.885223   49833 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:28.548489   49833 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:28.548542   49833 retry.go:31] will retry after 295.725043ms: dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:31.908488   49833 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:31.908601   49833 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:31.908624   49833 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:31.908635   49833 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:31.908659   49833 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:31.908669   49833 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:31.908952   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:31.909006   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:31.923636   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34055
	I0807 18:36:31.924084   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:31.924553   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:31.924575   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:31.924874   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:31.925106   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:31.926589   49833 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:31.926604   49833 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:31.926903   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:31.926950   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:31.942014   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0807 18:36:31.942377   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:31.942820   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:31.942853   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:31.943207   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:31.943376   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:31.946181   49833 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:31.946621   49833 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:31.946647   49833 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:31.946813   49833 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:31.947111   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:31.947155   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:31.961755   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
	I0807 18:36:31.962170   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:31.962656   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:31.962676   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:31.963063   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:31.963268   49833 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:31.963463   49833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:31.963481   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:31.966829   49833 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:31.967279   49833 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:31.967313   49833 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:31.967476   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:31.967699   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:31.967833   49833 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:31.967960   49833 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:32.051951   49833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:32.067361   49833 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:32.067388   49833 api_server.go:166] Checking apiserver status ...
	I0807 18:36:32.067419   49833 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:32.080865   49833 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:32.090865   49833 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:32.090944   49833 ssh_runner.go:195] Run: ls
	I0807 18:36:32.095869   49833 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:32.101202   49833 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:32.101228   49833 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:32.101238   49833 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:32.101256   49833 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:32.101596   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:32.101630   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:32.117120   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I0807 18:36:32.117685   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:32.118163   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:32.118182   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:32.118520   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:32.118727   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:32.120398   49833 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:32.120414   49833 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:32.120739   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:32.120779   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:32.136451   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0807 18:36:32.136826   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:32.137321   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:32.137346   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:32.137638   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:32.137856   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:32.140906   49833 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:32.141382   49833 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:32.141414   49833 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:32.141571   49833 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:32.141982   49833 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:32.142025   49833 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:32.157308   49833 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I0807 18:36:32.157692   49833 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:32.158159   49833 main.go:141] libmachine: Using API Version  1
	I0807 18:36:32.158178   49833 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:32.158524   49833 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:32.158756   49833 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:32.158937   49833 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:32.158961   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:32.161838   49833 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:32.162279   49833 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:32.162303   49833 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:32.162485   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:32.162642   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:32.162803   49833 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:32.162959   49833 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:32.247728   49833 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:32.261659   49833 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
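
Each status run fails on ha-198246-m02 the same way: the SSH dial to 192.168.39.251:22 returns "no route to host", so the host is reported as Error and kubelet/apiserver as Nonexistent. A minimal, illustrative Go sketch of that raw reachability check (address taken from the log; not part of the test harness):

// m02_reachability_sketch.go - illustrative only; performs the same TCP
// dial the status probe attempts before opening an SSH session.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.251:22", 3*time.Second)
	if err != nil {
		// Expected to match the "connect: no route to host" failures above.
		fmt.Println("ha-198246-m02 ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ha-198246-m02 ssh port reachable")
}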
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (3.738830133s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:38.792572   49949 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:38.792842   49949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:38.792855   49949 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:38.792861   49949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:38.793070   49949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:38.793242   49949 out.go:298] Setting JSON to false
	I0807 18:36:38.793264   49949 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:38.793297   49949 notify.go:220] Checking for updates...
	I0807 18:36:38.793594   49949 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:38.793608   49949 status.go:255] checking status of ha-198246 ...
	I0807 18:36:38.793985   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:38.794036   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:38.811816   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0807 18:36:38.812350   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:38.812963   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:38.813008   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:38.813403   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:38.813627   49949 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:38.815303   49949 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:38.815320   49949 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:38.815629   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:38.815667   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:38.830117   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37553
	I0807 18:36:38.830482   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:38.830944   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:38.830964   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:38.831244   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:38.831414   49949 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:38.834380   49949 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:38.834818   49949 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:38.834846   49949 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:38.834983   49949 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:38.835281   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:38.835322   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:38.849699   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0807 18:36:38.850149   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:38.850604   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:38.850624   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:38.850935   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:38.851117   49949 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:38.851375   49949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:38.851395   49949 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:38.853968   49949 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:38.854462   49949 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:38.854496   49949 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:38.854594   49949 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:38.854783   49949 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:38.854953   49949 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:38.855145   49949 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:38.940316   49949 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:38.947218   49949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:38.964729   49949 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:38.964758   49949 api_server.go:166] Checking apiserver status ...
	I0807 18:36:38.964796   49949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:38.979905   49949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:38.990170   49949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:38.990223   49949 ssh_runner.go:195] Run: ls
	I0807 18:36:38.994752   49949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:38.999082   49949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:38.999105   49949 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:38.999129   49949 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:38.999143   49949 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:38.999477   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:38.999510   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:39.014128   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33175
	I0807 18:36:39.014577   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:39.015016   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:39.015038   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:39.015396   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:39.015591   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:39.017047   49949 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:36:39.017065   49949 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:39.017345   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:39.017376   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:39.031690   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41883
	I0807 18:36:39.032144   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:39.032606   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:39.032630   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:39.032924   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:39.033110   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:36:39.035975   49949 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:39.036512   49949 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:39.036551   49949 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:39.036769   49949 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:36:39.037065   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:39.037103   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:39.052987   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36759
	I0807 18:36:39.053411   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:39.053897   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:39.053918   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:39.054253   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:39.054450   49949 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:36:39.054653   49949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:39.054673   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:36:39.059440   49949 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:39.059935   49949 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:36:39.059965   49949 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:36:39.060163   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:36:39.060381   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:36:39.060549   49949 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:36:39.060689   49949 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	W0807 18:36:42.116485   49949 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.251:22: connect: no route to host
	W0807 18:36:42.116592   49949 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	E0807 18:36:42.116616   49949 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:42.116639   49949 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:36:42.116654   49949 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.251:22: connect: no route to host
	I0807 18:36:42.116665   49949 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:42.116999   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.117051   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.132210   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46109
	I0807 18:36:42.132624   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.133093   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.133114   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.133411   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.133615   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:42.135360   49949 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:42.135382   49949 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:42.135714   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.135768   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.150610   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0807 18:36:42.151023   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.151462   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.151490   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.151772   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.151968   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:42.155226   49949 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:42.155694   49949 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:42.155732   49949 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:42.155857   49949 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:42.156193   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.156250   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.171053   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41639
	I0807 18:36:42.171453   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.171872   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.171895   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.172177   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.172355   49949 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:42.172539   49949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:42.172567   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:42.175508   49949 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:42.175877   49949 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:42.175901   49949 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:42.176036   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:42.176174   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:42.176341   49949 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:42.176480   49949 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:42.264821   49949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:42.282770   49949 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:42.282802   49949 api_server.go:166] Checking apiserver status ...
	I0807 18:36:42.282836   49949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:42.298124   49949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:42.308822   49949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:42.308881   49949 ssh_runner.go:195] Run: ls
	I0807 18:36:42.313604   49949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:42.318637   49949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:42.318662   49949 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:42.318674   49949 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:42.318693   49949 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:42.319056   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.319110   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.334007   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38327
	I0807 18:36:42.334420   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.334860   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.334883   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.335172   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.335356   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:42.337202   49949 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:42.337220   49949 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:42.337531   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.337583   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.351994   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46865
	I0807 18:36:42.352483   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.353175   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.353195   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.353663   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.353856   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:42.356638   49949 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:42.357027   49949 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:42.357050   49949 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:42.357198   49949 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:42.357517   49949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:42.357552   49949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:42.372527   49949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0807 18:36:42.372959   49949 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:42.373457   49949 main.go:141] libmachine: Using API Version  1
	I0807 18:36:42.373478   49949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:42.373777   49949 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:42.373972   49949 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:42.374157   49949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:42.374175   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:42.376998   49949 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:42.377459   49949 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:42.377486   49949 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:42.377657   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:42.377874   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:42.378032   49949 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:42.378191   49949 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:42.468700   49949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:42.487662   49949 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 7 (628.457484ms)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:36:51.143769   50086 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:36:51.144024   50086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:51.144033   50086 out.go:304] Setting ErrFile to fd 2...
	I0807 18:36:51.144037   50086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:36:51.144230   50086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:36:51.144450   50086 out.go:298] Setting JSON to false
	I0807 18:36:51.144473   50086 mustload.go:65] Loading cluster: ha-198246
	I0807 18:36:51.144510   50086 notify.go:220] Checking for updates...
	I0807 18:36:51.144910   50086 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:36:51.144933   50086 status.go:255] checking status of ha-198246 ...
	I0807 18:36:51.145287   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.145345   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.162943   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0807 18:36:51.163438   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.163990   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.164017   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.164481   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.164714   50086 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:36:51.166840   50086 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:36:51.166862   50086 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:51.167305   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.167354   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.183325   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0807 18:36:51.183838   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.184323   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.184347   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.184618   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.184799   50086 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:36:51.187964   50086 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:51.188500   50086 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:51.188547   50086 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:51.188698   50086 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:36:51.189007   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.189048   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.204171   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
	I0807 18:36:51.204612   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.205138   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.205162   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.205452   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.205640   50086 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:36:51.205827   50086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:51.205853   50086 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:36:51.208463   50086 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:51.208923   50086 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:36:51.208972   50086 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:36:51.209074   50086 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:36:51.209244   50086 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:36:51.209378   50086 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:36:51.209469   50086 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:36:51.289634   50086 ssh_runner.go:195] Run: systemctl --version
	I0807 18:36:51.296563   50086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:51.313639   50086 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:51.313667   50086 api_server.go:166] Checking apiserver status ...
	I0807 18:36:51.313704   50086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:51.329273   50086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:36:51.339383   50086 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:51.339446   50086 ssh_runner.go:195] Run: ls
	I0807 18:36:51.344515   50086 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:51.349191   50086 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:51.349219   50086 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:36:51.349232   50086 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:51.349267   50086 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:36:51.349608   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.349646   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.365759   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33009
	I0807 18:36:51.366211   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.366701   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.366725   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.367092   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.367287   50086 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:36:51.368894   50086 status.go:330] ha-198246-m02 host status = "Stopped" (err=<nil>)
	I0807 18:36:51.368911   50086 status.go:343] host is not running, skipping remaining checks
	I0807 18:36:51.368920   50086 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:51.368941   50086 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:36:51.369360   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.369407   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.384440   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0807 18:36:51.384890   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.385381   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.385403   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.385748   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.385938   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:36:51.387704   50086 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:36:51.387720   50086 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:51.388009   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.388046   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.403401   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I0807 18:36:51.403868   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.404304   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.404332   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.404650   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.404815   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:36:51.407692   50086 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:51.408123   50086 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:51.408148   50086 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:51.408298   50086 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:36:51.408620   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.408662   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.423689   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I0807 18:36:51.424074   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.424596   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.424620   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.424958   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.425183   50086 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:36:51.425354   50086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:51.425371   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:36:51.427899   50086 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:51.428285   50086 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:36:51.428308   50086 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:36:51.428453   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:36:51.428591   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:36:51.428769   50086 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:36:51.428902   50086 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:36:51.515834   50086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:51.530416   50086 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:36:51.530442   50086 api_server.go:166] Checking apiserver status ...
	I0807 18:36:51.530471   50086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:36:51.543479   50086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:36:51.552815   50086 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:36:51.552861   50086 ssh_runner.go:195] Run: ls
	I0807 18:36:51.557369   50086 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:36:51.563495   50086 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:36:51.563520   50086 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:36:51.563530   50086 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:36:51.563557   50086 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:36:51.563873   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.563916   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.580469   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46273
	I0807 18:36:51.580901   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.581383   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.581402   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.581741   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.581935   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:36:51.583426   50086 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:36:51.583441   50086 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:51.583717   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.583772   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.598904   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0807 18:36:51.599354   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.599813   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.599829   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.600160   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.600383   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:36:51.603194   50086 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:51.603650   50086 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:51.603685   50086 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:51.603848   50086 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:36:51.604175   50086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:36:51.604251   50086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:36:51.619009   50086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35789
	I0807 18:36:51.619407   50086 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:36:51.619810   50086 main.go:141] libmachine: Using API Version  1
	I0807 18:36:51.619831   50086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:36:51.620140   50086 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:36:51.620374   50086 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:36:51.620578   50086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:36:51.620600   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:36:51.623069   50086 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:51.623425   50086 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:36:51.623445   50086 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:36:51.623586   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:36:51.623732   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:36:51.623899   50086 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:36:51.624037   50086 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:36:51.712167   50086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:36:51.729760   50086 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
E0807 18:36:58.764738   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 7 (624.097759ms)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-198246-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:37:05.440845   50206 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:37:05.440966   50206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:37:05.440974   50206 out.go:304] Setting ErrFile to fd 2...
	I0807 18:37:05.440979   50206 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:37:05.441172   50206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:37:05.441331   50206 out.go:298] Setting JSON to false
	I0807 18:37:05.441351   50206 mustload.go:65] Loading cluster: ha-198246
	I0807 18:37:05.441440   50206 notify.go:220] Checking for updates...
	I0807 18:37:05.441692   50206 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:37:05.441705   50206 status.go:255] checking status of ha-198246 ...
	I0807 18:37:05.442092   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.442143   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.458699   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0807 18:37:05.459131   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.459698   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.459725   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.460052   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.460246   50206 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:37:05.461897   50206 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:37:05.461913   50206 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:37:05.462309   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.462354   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.478065   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0807 18:37:05.478539   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.479012   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.479041   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.479326   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.479483   50206 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:37:05.482790   50206 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:37:05.483233   50206 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:37:05.483253   50206 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:37:05.483397   50206 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:37:05.483702   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.483740   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.498932   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I0807 18:37:05.499281   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.499719   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.499744   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.500062   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.500264   50206 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:37:05.500478   50206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:37:05.500507   50206 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:37:05.503378   50206 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:37:05.503793   50206 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:37:05.503823   50206 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:37:05.503937   50206 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:37:05.504118   50206 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:37:05.504292   50206 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:37:05.504471   50206 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:37:05.584479   50206 ssh_runner.go:195] Run: systemctl --version
	I0807 18:37:05.590864   50206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:37:05.606000   50206 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:37:05.606024   50206 api_server.go:166] Checking apiserver status ...
	I0807 18:37:05.606052   50206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:37:05.620240   50206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0807 18:37:05.629711   50206 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:37:05.629776   50206 ssh_runner.go:195] Run: ls
	I0807 18:37:05.634306   50206 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:37:05.638410   50206 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:37:05.638431   50206 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:37:05.638441   50206 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:37:05.638455   50206 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:37:05.638767   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.638799   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.654720   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43631
	I0807 18:37:05.655157   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.655595   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.655614   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.655905   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.656084   50206 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:37:05.657602   50206 status.go:330] ha-198246-m02 host status = "Stopped" (err=<nil>)
	I0807 18:37:05.657616   50206 status.go:343] host is not running, skipping remaining checks
	I0807 18:37:05.657624   50206 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:37:05.657642   50206 status.go:255] checking status of ha-198246-m03 ...
	I0807 18:37:05.658041   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.658088   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.672652   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35199
	I0807 18:37:05.673088   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.673630   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.673655   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.674007   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.674206   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:37:05.675971   50206 status.go:330] ha-198246-m03 host status = "Running" (err=<nil>)
	I0807 18:37:05.675985   50206 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:37:05.676399   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.676454   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.691360   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37743
	I0807 18:37:05.691717   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.692345   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.692368   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.692659   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.692846   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:37:05.695739   50206 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:05.696157   50206 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:37:05.696180   50206 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:05.696315   50206 host.go:66] Checking if "ha-198246-m03" exists ...
	I0807 18:37:05.696604   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.696636   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.710759   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0807 18:37:05.711165   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.711597   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.711630   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.711957   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.712180   50206 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:37:05.712380   50206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:37:05.712399   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:37:05.714915   50206 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:05.715302   50206 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:37:05.715328   50206 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:05.715471   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:37:05.715647   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:37:05.715783   50206 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:37:05.715946   50206 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:37:05.804350   50206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:37:05.823794   50206 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:37:05.823824   50206 api_server.go:166] Checking apiserver status ...
	I0807 18:37:05.823859   50206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:37:05.837518   50206 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup
	W0807 18:37:05.848842   50206 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1569/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:37:05.848902   50206 ssh_runner.go:195] Run: ls
	I0807 18:37:05.856195   50206 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:37:05.860671   50206 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:37:05.860695   50206 status.go:422] ha-198246-m03 apiserver status = Running (err=<nil>)
	I0807 18:37:05.860702   50206 status.go:257] ha-198246-m03 status: &{Name:ha-198246-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:37:05.860716   50206 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:37:05.861070   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.861104   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.875842   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36891
	I0807 18:37:05.876294   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.876740   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.876762   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.877078   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.877262   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:37:05.879084   50206 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:37:05.879101   50206 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:37:05.879489   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.879534   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.897183   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38103
	I0807 18:37:05.897638   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.898081   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.898100   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.898391   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.898574   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:37:05.901398   50206 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:05.901853   50206 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:37:05.901881   50206 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:05.902054   50206 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:37:05.902459   50206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:05.902503   50206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:05.918630   50206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41577
	I0807 18:37:05.918994   50206 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:05.919486   50206 main.go:141] libmachine: Using API Version  1
	I0807 18:37:05.919504   50206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:05.919821   50206 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:05.920043   50206 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:37:05.920194   50206 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:37:05.920235   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:37:05.922701   50206 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:05.923107   50206 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:37:05.923127   50206 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:05.923238   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:37:05.923406   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:37:05.923568   50206 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:37:05.923707   50206 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:37:06.007803   50206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:37:06.021981   50206 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198246 -n ha-198246
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-198246 logs -n 25: (1.447934583s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m03_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m04 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp testdata/cp-test.txt                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m04_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03:/home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m03 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-198246 node stop m02 -v=7                                                     | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-198246 node start m02 -v=7                                                    | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:27:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:27:21.721727   44266 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:27:21.721967   44266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:21.721975   44266 out.go:304] Setting ErrFile to fd 2...
	I0807 18:27:21.721979   44266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:27:21.722152   44266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:27:21.722687   44266 out.go:298] Setting JSON to false
	I0807 18:27:21.723512   44266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7788,"bootTime":1723047454,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:27:21.723565   44266 start.go:139] virtualization: kvm guest
	I0807 18:27:21.725729   44266 out.go:177] * [ha-198246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 18:27:21.727183   44266 notify.go:220] Checking for updates...
	I0807 18:27:21.727193   44266 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:27:21.728548   44266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:27:21.729974   44266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:27:21.731326   44266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:21.732576   44266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:27:21.733798   44266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:27:21.735342   44266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:27:21.769737   44266 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 18:27:21.771127   44266 start.go:297] selected driver: kvm2
	I0807 18:27:21.771144   44266 start.go:901] validating driver "kvm2" against <nil>
	I0807 18:27:21.771156   44266 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:27:21.771870   44266 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:21.771942   44266 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 18:27:21.786733   44266 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 18:27:21.786777   44266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 18:27:21.786970   44266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:27:21.787023   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:27:21.787034   44266 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0807 18:27:21.787041   44266 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0807 18:27:21.787097   44266 start.go:340] cluster config:
	{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:27:21.787200   44266 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:27:21.789527   44266 out.go:177] * Starting "ha-198246" primary control-plane node in "ha-198246" cluster
	I0807 18:27:21.790581   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:27:21.790607   44266 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 18:27:21.790615   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:27:21.790695   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:27:21.790708   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:27:21.790995   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:27:21.791015   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json: {Name:mk9ea4fdb45a0ad19fddd77d9e86e860b1888943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:21.791157   44266 start.go:360] acquireMachinesLock for ha-198246: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:27:21.791197   44266 start.go:364] duration metric: took 17.005µs to acquireMachinesLock for "ha-198246"
	I0807 18:27:21.791219   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:27:21.791271   44266 start.go:125] createHost starting for "" (driver="kvm2")
	I0807 18:27:21.792742   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:27:21.792862   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:27:21.792923   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:27:21.806899   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0807 18:27:21.807336   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:27:21.807888   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:27:21.807907   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:27:21.808260   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:27:21.808450   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:21.808588   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:21.808718   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:27:21.808749   44266 client.go:168] LocalClient.Create starting
	I0807 18:27:21.808783   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:27:21.808815   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:27:21.808831   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:27:21.808893   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:27:21.808911   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:27:21.808924   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:27:21.808938   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:27:21.808951   44266 main.go:141] libmachine: (ha-198246) Calling .PreCreateCheck
	I0807 18:27:21.809303   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:21.809632   44266 main.go:141] libmachine: Creating machine...
	I0807 18:27:21.809644   44266 main.go:141] libmachine: (ha-198246) Calling .Create
	I0807 18:27:21.809775   44266 main.go:141] libmachine: (ha-198246) Creating KVM machine...
	I0807 18:27:21.810961   44266 main.go:141] libmachine: (ha-198246) DBG | found existing default KVM network
	I0807 18:27:21.811595   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:21.811462   44289 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0807 18:27:21.811613   44266 main.go:141] libmachine: (ha-198246) DBG | created network xml: 
	I0807 18:27:21.811622   44266 main.go:141] libmachine: (ha-198246) DBG | <network>
	I0807 18:27:21.811630   44266 main.go:141] libmachine: (ha-198246) DBG |   <name>mk-ha-198246</name>
	I0807 18:27:21.811643   44266 main.go:141] libmachine: (ha-198246) DBG |   <dns enable='no'/>
	I0807 18:27:21.811649   44266 main.go:141] libmachine: (ha-198246) DBG |   
	I0807 18:27:21.811659   44266 main.go:141] libmachine: (ha-198246) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0807 18:27:21.811669   44266 main.go:141] libmachine: (ha-198246) DBG |     <dhcp>
	I0807 18:27:21.811682   44266 main.go:141] libmachine: (ha-198246) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0807 18:27:21.811690   44266 main.go:141] libmachine: (ha-198246) DBG |     </dhcp>
	I0807 18:27:21.811695   44266 main.go:141] libmachine: (ha-198246) DBG |   </ip>
	I0807 18:27:21.811700   44266 main.go:141] libmachine: (ha-198246) DBG |   
	I0807 18:27:21.811723   44266 main.go:141] libmachine: (ha-198246) DBG | </network>
	I0807 18:27:21.811744   44266 main.go:141] libmachine: (ha-198246) DBG | 
	I0807 18:27:21.816727   44266 main.go:141] libmachine: (ha-198246) DBG | trying to create private KVM network mk-ha-198246 192.168.39.0/24...
	I0807 18:27:21.878767   44266 main.go:141] libmachine: (ha-198246) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 ...
	I0807 18:27:21.878803   44266 main.go:141] libmachine: (ha-198246) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:27:21.878813   44266 main.go:141] libmachine: (ha-198246) DBG | private KVM network mk-ha-198246 192.168.39.0/24 created
	I0807 18:27:21.878832   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:21.878720   44289 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:21.878874   44266 main.go:141] libmachine: (ha-198246) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:27:22.116138   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.116028   44289 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa...
	I0807 18:27:22.201603   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.201499   44289 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/ha-198246.rawdisk...
	I0807 18:27:22.201635   44266 main.go:141] libmachine: (ha-198246) DBG | Writing magic tar header
	I0807 18:27:22.201649   44266 main.go:141] libmachine: (ha-198246) DBG | Writing SSH key tar header
	I0807 18:27:22.201665   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:22.201611   44289 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 ...
	I0807 18:27:22.201729   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246
	I0807 18:27:22.201753   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246 (perms=drwx------)
	I0807 18:27:22.201760   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:27:22.201769   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:27:22.201775   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:27:22.201784   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:27:22.201812   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:27:22.201832   44266 main.go:141] libmachine: (ha-198246) DBG | Checking permissions on dir: /home
	I0807 18:27:22.201841   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:27:22.201848   44266 main.go:141] libmachine: (ha-198246) DBG | Skipping /home - not owner
	I0807 18:27:22.201885   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:27:22.201909   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:27:22.201940   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:27:22.201958   44266 main.go:141] libmachine: (ha-198246) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:27:22.201972   44266 main.go:141] libmachine: (ha-198246) Creating domain...
	I0807 18:27:22.202738   44266 main.go:141] libmachine: (ha-198246) define libvirt domain using xml: 
	I0807 18:27:22.202751   44266 main.go:141] libmachine: (ha-198246) <domain type='kvm'>
	I0807 18:27:22.202774   44266 main.go:141] libmachine: (ha-198246)   <name>ha-198246</name>
	I0807 18:27:22.202794   44266 main.go:141] libmachine: (ha-198246)   <memory unit='MiB'>2200</memory>
	I0807 18:27:22.202803   44266 main.go:141] libmachine: (ha-198246)   <vcpu>2</vcpu>
	I0807 18:27:22.202808   44266 main.go:141] libmachine: (ha-198246)   <features>
	I0807 18:27:22.202813   44266 main.go:141] libmachine: (ha-198246)     <acpi/>
	I0807 18:27:22.202817   44266 main.go:141] libmachine: (ha-198246)     <apic/>
	I0807 18:27:22.202822   44266 main.go:141] libmachine: (ha-198246)     <pae/>
	I0807 18:27:22.202827   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.202831   44266 main.go:141] libmachine: (ha-198246)   </features>
	I0807 18:27:22.202835   44266 main.go:141] libmachine: (ha-198246)   <cpu mode='host-passthrough'>
	I0807 18:27:22.202840   44266 main.go:141] libmachine: (ha-198246)   
	I0807 18:27:22.202844   44266 main.go:141] libmachine: (ha-198246)   </cpu>
	I0807 18:27:22.202848   44266 main.go:141] libmachine: (ha-198246)   <os>
	I0807 18:27:22.202852   44266 main.go:141] libmachine: (ha-198246)     <type>hvm</type>
	I0807 18:27:22.202857   44266 main.go:141] libmachine: (ha-198246)     <boot dev='cdrom'/>
	I0807 18:27:22.202864   44266 main.go:141] libmachine: (ha-198246)     <boot dev='hd'/>
	I0807 18:27:22.202878   44266 main.go:141] libmachine: (ha-198246)     <bootmenu enable='no'/>
	I0807 18:27:22.202882   44266 main.go:141] libmachine: (ha-198246)   </os>
	I0807 18:27:22.202898   44266 main.go:141] libmachine: (ha-198246)   <devices>
	I0807 18:27:22.202906   44266 main.go:141] libmachine: (ha-198246)     <disk type='file' device='cdrom'>
	I0807 18:27:22.202934   44266 main.go:141] libmachine: (ha-198246)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/boot2docker.iso'/>
	I0807 18:27:22.202951   44266 main.go:141] libmachine: (ha-198246)       <target dev='hdc' bus='scsi'/>
	I0807 18:27:22.202961   44266 main.go:141] libmachine: (ha-198246)       <readonly/>
	I0807 18:27:22.202972   44266 main.go:141] libmachine: (ha-198246)     </disk>
	I0807 18:27:22.202982   44266 main.go:141] libmachine: (ha-198246)     <disk type='file' device='disk'>
	I0807 18:27:22.202994   44266 main.go:141] libmachine: (ha-198246)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:27:22.203006   44266 main.go:141] libmachine: (ha-198246)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/ha-198246.rawdisk'/>
	I0807 18:27:22.203017   44266 main.go:141] libmachine: (ha-198246)       <target dev='hda' bus='virtio'/>
	I0807 18:27:22.203025   44266 main.go:141] libmachine: (ha-198246)     </disk>
	I0807 18:27:22.203039   44266 main.go:141] libmachine: (ha-198246)     <interface type='network'>
	I0807 18:27:22.203047   44266 main.go:141] libmachine: (ha-198246)       <source network='mk-ha-198246'/>
	I0807 18:27:22.203055   44266 main.go:141] libmachine: (ha-198246)       <model type='virtio'/>
	I0807 18:27:22.203066   44266 main.go:141] libmachine: (ha-198246)     </interface>
	I0807 18:27:22.203074   44266 main.go:141] libmachine: (ha-198246)     <interface type='network'>
	I0807 18:27:22.203086   44266 main.go:141] libmachine: (ha-198246)       <source network='default'/>
	I0807 18:27:22.203094   44266 main.go:141] libmachine: (ha-198246)       <model type='virtio'/>
	I0807 18:27:22.203103   44266 main.go:141] libmachine: (ha-198246)     </interface>
	I0807 18:27:22.203110   44266 main.go:141] libmachine: (ha-198246)     <serial type='pty'>
	I0807 18:27:22.203121   44266 main.go:141] libmachine: (ha-198246)       <target port='0'/>
	I0807 18:27:22.203127   44266 main.go:141] libmachine: (ha-198246)     </serial>
	I0807 18:27:22.203154   44266 main.go:141] libmachine: (ha-198246)     <console type='pty'>
	I0807 18:27:22.203178   44266 main.go:141] libmachine: (ha-198246)       <target type='serial' port='0'/>
	I0807 18:27:22.203188   44266 main.go:141] libmachine: (ha-198246)     </console>
	I0807 18:27:22.203200   44266 main.go:141] libmachine: (ha-198246)     <rng model='virtio'>
	I0807 18:27:22.203214   44266 main.go:141] libmachine: (ha-198246)       <backend model='random'>/dev/random</backend>
	I0807 18:27:22.203229   44266 main.go:141] libmachine: (ha-198246)     </rng>
	I0807 18:27:22.203239   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.203243   44266 main.go:141] libmachine: (ha-198246)     
	I0807 18:27:22.203251   44266 main.go:141] libmachine: (ha-198246)   </devices>
	I0807 18:27:22.203258   44266 main.go:141] libmachine: (ha-198246) </domain>
	I0807 18:27:22.203271   44266 main.go:141] libmachine: (ha-198246) 
	I0807 18:27:22.207680   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:90:2f:e2 in network default
	I0807 18:27:22.208187   44266 main.go:141] libmachine: (ha-198246) Ensuring networks are active...
	I0807 18:27:22.208224   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:22.208779   44266 main.go:141] libmachine: (ha-198246) Ensuring network default is active
	I0807 18:27:22.209008   44266 main.go:141] libmachine: (ha-198246) Ensuring network mk-ha-198246 is active
	I0807 18:27:22.209409   44266 main.go:141] libmachine: (ha-198246) Getting domain xml...
	I0807 18:27:22.209962   44266 main.go:141] libmachine: (ha-198246) Creating domain...
	I0807 18:27:23.404405   44266 main.go:141] libmachine: (ha-198246) Waiting to get IP...
	I0807 18:27:23.405206   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:23.405600   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:23.405641   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:23.405567   44289 retry.go:31] will retry after 306.958712ms: waiting for machine to come up
	I0807 18:27:23.713982   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:23.714499   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:23.714526   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:23.714462   44289 retry.go:31] will retry after 299.119708ms: waiting for machine to come up
	I0807 18:27:24.014947   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.015426   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.015446   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.015393   44289 retry.go:31] will retry after 384.564278ms: waiting for machine to come up
	I0807 18:27:24.402079   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.402483   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.402507   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.402453   44289 retry.go:31] will retry after 547.85343ms: waiting for machine to come up
	I0807 18:27:24.952336   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:24.952783   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:24.952809   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:24.952724   44289 retry.go:31] will retry after 591.886125ms: waiting for machine to come up
	I0807 18:27:25.546536   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:25.546960   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:25.546987   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:25.546919   44289 retry.go:31] will retry after 637.639818ms: waiting for machine to come up
	I0807 18:27:26.185754   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:26.186206   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:26.186253   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:26.186171   44289 retry.go:31] will retry after 1.07415852s: waiting for machine to come up
	I0807 18:27:27.261894   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:27.262328   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:27.262357   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:27.262273   44289 retry.go:31] will retry after 1.388616006s: waiting for machine to come up
	I0807 18:27:28.652877   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:28.653287   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:28.653318   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:28.653222   44289 retry.go:31] will retry after 1.163215795s: waiting for machine to come up
	I0807 18:27:29.818449   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:29.818914   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:29.818948   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:29.818858   44289 retry.go:31] will retry after 2.029996828s: waiting for machine to come up
	I0807 18:27:31.849800   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:31.850166   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:31.850195   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:31.850108   44289 retry.go:31] will retry after 1.806326332s: waiting for machine to come up
	I0807 18:27:33.659132   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:33.659739   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:33.659768   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:33.659685   44289 retry.go:31] will retry after 3.239044606s: waiting for machine to come up
	I0807 18:27:36.900422   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:36.900792   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:36.900819   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:36.900742   44289 retry.go:31] will retry after 3.037723315s: waiting for machine to come up
	I0807 18:27:39.941930   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:39.942412   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find current IP address of domain ha-198246 in network mk-ha-198246
	I0807 18:27:39.942441   44266 main.go:141] libmachine: (ha-198246) DBG | I0807 18:27:39.942337   44289 retry.go:31] will retry after 5.1268659s: waiting for machine to come up
	I0807 18:27:45.074427   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.074880   44266 main.go:141] libmachine: (ha-198246) Found IP for machine: 192.168.39.196
	I0807 18:27:45.074901   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has current primary IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.074909   44266 main.go:141] libmachine: (ha-198246) Reserving static IP address...
	I0807 18:27:45.075244   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find host DHCP lease matching {name: "ha-198246", mac: "52:54:00:b0:88:98", ip: "192.168.39.196"} in network mk-ha-198246
	I0807 18:27:45.145259   44266 main.go:141] libmachine: (ha-198246) DBG | Getting to WaitForSSH function...
	I0807 18:27:45.145285   44266 main.go:141] libmachine: (ha-198246) Reserved static IP address: 192.168.39.196
	I0807 18:27:45.145343   44266 main.go:141] libmachine: (ha-198246) Waiting for SSH to be available...
	I0807 18:27:45.147843   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:45.148233   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246
	I0807 18:27:45.148255   44266 main.go:141] libmachine: (ha-198246) DBG | unable to find defined IP address of network mk-ha-198246 interface with MAC address 52:54:00:b0:88:98
	I0807 18:27:45.148474   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH client type: external
	I0807 18:27:45.148506   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa (-rw-------)
	I0807 18:27:45.148558   44266 main.go:141] libmachine: (ha-198246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:27:45.148578   44266 main.go:141] libmachine: (ha-198246) DBG | About to run SSH command:
	I0807 18:27:45.148592   44266 main.go:141] libmachine: (ha-198246) DBG | exit 0
	I0807 18:27:45.152274   44266 main.go:141] libmachine: (ha-198246) DBG | SSH cmd err, output: exit status 255: 
	I0807 18:27:45.152292   44266 main.go:141] libmachine: (ha-198246) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0807 18:27:45.152299   44266 main.go:141] libmachine: (ha-198246) DBG | command : exit 0
	I0807 18:27:45.152304   44266 main.go:141] libmachine: (ha-198246) DBG | err     : exit status 255
	I0807 18:27:45.152312   44266 main.go:141] libmachine: (ha-198246) DBG | output  : 
	I0807 18:27:48.153047   44266 main.go:141] libmachine: (ha-198246) DBG | Getting to WaitForSSH function...
	I0807 18:27:48.155522   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.155912   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.155936   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.156105   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH client type: external
	I0807 18:27:48.156130   44266 main.go:141] libmachine: (ha-198246) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa (-rw-------)
	I0807 18:27:48.156167   44266 main.go:141] libmachine: (ha-198246) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.196 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:27:48.156191   44266 main.go:141] libmachine: (ha-198246) DBG | About to run SSH command:
	I0807 18:27:48.156225   44266 main.go:141] libmachine: (ha-198246) DBG | exit 0
	I0807 18:27:48.280381   44266 main.go:141] libmachine: (ha-198246) DBG | SSH cmd err, output: <nil>: 
	I0807 18:27:48.280692   44266 main.go:141] libmachine: (ha-198246) KVM machine creation complete!
	I0807 18:27:48.281058   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:48.281656   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:48.281875   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:48.282036   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:27:48.282050   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:27:48.283345   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:27:48.283363   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:27:48.283372   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:27:48.283379   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.286023   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.286450   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.286469   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.286618   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.286773   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.286910   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.287021   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.287206   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.287379   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.287389   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:27:48.387621   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:27:48.387645   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:27:48.387655   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.390612   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.391010   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.391041   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.391226   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.391498   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.391674   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.391801   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.392003   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.392181   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.392195   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:27:48.492889   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:27:48.492963   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:27:48.492969   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:27:48.492976   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.493236   44266 buildroot.go:166] provisioning hostname "ha-198246"
	I0807 18:27:48.493263   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.493468   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.496265   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.496578   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.496602   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.496742   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.496924   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.497076   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.497274   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.497500   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.497677   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.497689   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246 && echo "ha-198246" | sudo tee /etc/hostname
	I0807 18:27:48.615801   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:27:48.615855   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.618925   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.619286   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.619315   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.619478   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.619662   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.619808   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.619965   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.620141   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:48.620341   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:48.620359   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:27:48.729682   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:27:48.729740   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:27:48.729770   44266 buildroot.go:174] setting up certificates
	I0807 18:27:48.729789   44266 provision.go:84] configureAuth start
	I0807 18:27:48.729808   44266 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:27:48.730094   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:48.732947   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.733289   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.733317   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.733475   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.735604   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.735911   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.735935   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.736091   44266 provision.go:143] copyHostCerts
	I0807 18:27:48.736118   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:27:48.736160   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:27:48.736174   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:27:48.736261   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:27:48.736361   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:27:48.736380   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:27:48.736386   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:27:48.736428   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:27:48.736530   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:27:48.736553   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:27:48.736560   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:27:48.736583   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:27:48.736657   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246 san=[127.0.0.1 192.168.39.196 ha-198246 localhost minikube]
	I0807 18:27:48.961157   44266 provision.go:177] copyRemoteCerts
	I0807 18:27:48.961215   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:27:48.961238   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:48.964265   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.964661   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:48.964697   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:48.964961   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:48.965206   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:48.965427   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:48.965581   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.047016   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:27:49.047096   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:27:49.071078   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:27:49.071152   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:27:49.095496   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:27:49.095566   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:27:49.120006   44266 provision.go:87] duration metric: took 390.201413ms to configureAuth
	I0807 18:27:49.120032   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:27:49.120250   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:27:49.120330   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.122781   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.123123   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.123148   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.123319   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.123504   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.123653   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.123754   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.123923   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:49.124077   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:49.124093   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:27:49.379427   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
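The %!s(MISSING) fragments in the command above appear to be format-verb artifacts from how minikube logs the command template; the step itself just drops a CRIO_MINIKUBE_OPTIONS file and restarts CRI-O. A rough manual equivalent, run inside the guest (the 10.96.0.0/12 value matches the service CIDR shown later in this log):

	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio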
	
	I0807 18:27:49.379449   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:27:49.379457   44266 main.go:141] libmachine: (ha-198246) Calling .GetURL
	I0807 18:27:49.381160   44266 main.go:141] libmachine: (ha-198246) DBG | Using libvirt version 6000000
	I0807 18:27:49.383505   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.383829   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.383861   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.384032   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:27:49.384051   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:27:49.384060   44266 client.go:171] duration metric: took 27.57529956s to LocalClient.Create
	I0807 18:27:49.384091   44266 start.go:167] duration metric: took 27.575373855s to libmachine.API.Create "ha-198246"
	I0807 18:27:49.384103   44266 start.go:293] postStartSetup for "ha-198246" (driver="kvm2")
	I0807 18:27:49.384117   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:27:49.384137   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.384384   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:27:49.384406   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.387011   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.387377   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.387400   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.387601   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.387778   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.387917   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.388019   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.467416   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:27:49.471819   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:27:49.471844   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:27:49.471913   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:27:49.471996   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:27:49.472007   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:27:49.472100   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:27:49.482472   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:27:49.507293   44266 start.go:296] duration metric: took 123.178167ms for postStartSetup
	I0807 18:27:49.507345   44266 main.go:141] libmachine: (ha-198246) Calling .GetConfigRaw
	I0807 18:27:49.507928   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:49.510575   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.511008   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.511039   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.511346   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:27:49.511529   44266 start.go:128] duration metric: took 27.720249653s to createHost
	I0807 18:27:49.511551   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.513835   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.514239   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.514268   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.514412   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.514597   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.514751   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.514864   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.515031   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:27:49.515233   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:27:49.515246   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:27:49.621198   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055269.601266320
	
	I0807 18:27:49.621228   44266 fix.go:216] guest clock: 1723055269.601266320
	I0807 18:27:49.621239   44266 fix.go:229] Guest: 2024-08-07 18:27:49.60126632 +0000 UTC Remote: 2024-08-07 18:27:49.511541014 +0000 UTC m=+27.822561678 (delta=89.725306ms)
	I0807 18:27:49.621348   44266 fix.go:200] guest clock delta is within tolerance: 89.725306ms
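The clock-delta lines above are simply a `date +%s.%N` sample taken in the guest minus the host-side reference timestamp; a small sketch of the same arithmetic using the values from this run:

	guest=1723055269.601266320   # guest clock sampled over SSH (from the log above)
	host=1723055269.511541014    # host-side reference timestamp (from the log above)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %.6f s\n", g - h }'
	# prints "delta: 0.089725 s", i.e. the ~89.7ms reported above, well inside tolerance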
	I0807 18:27:49.621358   44266 start.go:83] releasing machines lock for "ha-198246", held for 27.830148378s
	I0807 18:27:49.621384   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.621648   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:49.624076   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.624475   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.624506   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.624646   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625094   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625251   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:27:49.625329   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:27:49.625368   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.625433   44266 ssh_runner.go:195] Run: cat /version.json
	I0807 18:27:49.625456   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:27:49.628179   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628428   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628489   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.628513   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628653   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.628845   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.628875   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:49.628902   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:49.628989   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.629163   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:27:49.629174   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.629309   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:27:49.629489   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:27:49.629653   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:27:49.707121   44266 ssh_runner.go:195] Run: systemctl --version
	I0807 18:27:49.730199   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:27:49.894353   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:27:49.901460   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:27:49.901532   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:27:49.918470   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:27:49.918496   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:27:49.918550   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:27:49.935346   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:27:49.950325   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:27:49.950373   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:27:49.965026   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:27:49.979393   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:27:50.101391   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:27:50.266571   44266 docker.go:233] disabling docker service ...
	I0807 18:27:50.266633   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:27:50.280886   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:27:50.293687   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:27:50.411890   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:27:50.531647   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:27:50.545917   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:27:50.565503   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:27:50.565564   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.577648   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:27:50.577727   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.589717   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.601142   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.612276   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:27:50.623423   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.634380   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:27:50.652648   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
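The sed edits above all target the /etc/crio/crio.conf.d/02-crio.conf drop-in: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick guest-side check of what they should leave behind (a sketch; the expected lines are shown as comments):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",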
	I0807 18:27:50.664994   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:27:50.675990   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:27:50.676071   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:27:50.690790   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
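The bridge-nf sysctl only exists once br_netfilter is loaded, which is why the first probe exits with status 255 and minikube falls back to modprobe. A quick check from inside the guest that the kernel prerequisites ended up in place (a sketch):

	lsmod | grep br_netfilter                   # loaded by the modprobe above
	sysctl net.bridge.bridge-nf-call-iptables   # present once the module is in
	cat /proc/sys/net/ipv4/ip_forward           # should print 1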
	I0807 18:27:50.702376   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:27:50.836087   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:27:50.977071   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:27:50.977144   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:27:50.982362   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:27:50.982434   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:27:50.986273   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:27:51.023888   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
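The crictl version probe above succeeds because of the /etc/crictl.yaml written a moment earlier, which points the client at CRI-O's socket. Inside the guest the same thing can be reproduced explicitly (a sketch):

	sudo cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version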
	I0807 18:27:51.023993   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:27:51.051884   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:27:51.082665   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:27:51.083804   44266 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:27:51.086499   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:51.086829   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:27:51.086855   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:27:51.087080   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:27:51.091372   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:27:51.104446   44266 kubeadm.go:883] updating cluster {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:27:51.104537   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:27:51.104583   44266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:27:51.135506   44266 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0807 18:27:51.135568   44266 ssh_runner.go:195] Run: which lz4
	I0807 18:27:51.140129   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0807 18:27:51.140252   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0807 18:27:51.144801   44266 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 18:27:51.144833   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0807 18:27:52.554895   44266 crio.go:462] duration metric: took 1.414692613s to copy over tarball
	I0807 18:27:52.555019   44266 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 18:27:54.702005   44266 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.146953106s)
	I0807 18:27:54.702032   44266 crio.go:469] duration metric: took 2.147109225s to extract the tarball
	I0807 18:27:54.702041   44266 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 18:27:54.740000   44266 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:27:54.786797   44266 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:27:54.786816   44266 cache_images.go:84] Images are preloaded, skipping loading
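The preload path above avoids pulling every control-plane image over the network: the cached tarball is copied into the guest, unpacked straight into /var, and the second crictl check then finds all images present. A manual equivalent of that restore (a sketch; <preload-tarball> and <id_rsa> stand for the cache file and SSH key paths named in the log, and /tmp is used here instead of /):

	scp -i <id_rsa> <preload-tarball> docker@192.168.39.196:/tmp/preloaded.tar.lz4
	ssh -i <id_rsa> docker@192.168.39.196 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4'
	ssh -i <id_rsa> docker@192.168.39.196 'sudo rm /tmp/preloaded.tar.lz4 && sudo crictl images'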
	I0807 18:27:54.786825   44266 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0807 18:27:54.786956   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
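This ExecStart override is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down). Once it is in place, the effective kubelet flags can be confirmed from the guest (a sketch):

	systemctl cat kubelet                            # unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart --no-pager   # resolved ExecStart with the flags above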
	I0807 18:27:54.787033   44266 ssh_runner.go:195] Run: crio config
	I0807 18:27:54.830632   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:27:54.830659   44266 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:27:54.830671   44266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:27:54.830691   44266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198246 NodeName:ha-198246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:27:54.830808   44266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-198246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
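This rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init below. Since the guest keeps its kubeadm under /var/lib/minikube/binaries, a dry run against the file is a cheap way to sanity-check it without touching cluster state (a sketch):

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# on kubeadm >= v1.26 the schema alone can also be checked with:
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml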
	
	I0807 18:27:54.830828   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:27:54.830867   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:27:54.849054   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:27:54.849165   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
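kube-vip runs as a static pod on each control-plane node; with cp_enable and lb_enable set it holds the plndr-cp-lock lease to decide which node advertises the 192.168.39.254 VIP and load-balances port 8443. Once the pod is up, a quick guest-side check that the VIP is live (a sketch):

	ip addr show eth0 | grep 192.168.39.254       # VIP bound on the current lease holder
	curl -k https://192.168.39.254:8443/healthz   # any HTTP response shows the VIP reaches an API server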
	I0807 18:27:54.849229   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:27:54.859040   44266 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:27:54.859110   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:27:54.868475   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:27:54.885744   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:27:54.902712   44266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0807 18:27:54.919755   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0807 18:27:54.936740   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:27:54.940938   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:27:54.953525   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:27:55.078749   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:27:55.097378   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.196
	I0807 18:27:55.097404   44266 certs.go:194] generating shared ca certs ...
	I0807 18:27:55.097422   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.097635   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:27:55.097699   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:27:55.097714   44266 certs.go:256] generating profile certs ...
	I0807 18:27:55.097787   44266 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:27:55.097814   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt with IP's: []
	I0807 18:27:55.208693   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt ...
	I0807 18:27:55.208724   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt: {Name:mka7fa8cfb74ff61110b7cfa5be9a6c01adb62d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.208915   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key ...
	I0807 18:27:55.208929   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key: {Name:mk2f8f0495ba491dab5e08ca790f78097bcc62bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.209031   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4
	I0807 18:27:55.209049   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.254]
	I0807 18:27:55.624285   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 ...
	I0807 18:27:55.624314   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4: {Name:mkd68d7f250c70cd5fa8d28ad5bc1bbe0c86a654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.624461   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4 ...
	I0807 18:27:55.624473   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4: {Name:mkff3833d02b04ce9c36a734c937e13f709f80e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.624542   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.890fd0f4 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:27:55.624619   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.890fd0f4 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:27:55.624669   44266 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:27:55.624697   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt with IP's: []
	I0807 18:27:55.759073   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt ...
	I0807 18:27:55.759102   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt: {Name:mkb3499dae347a7cfa9dfc4b50cfa2f9ee673ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.759241   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key ...
	I0807 18:27:55.759251   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key: {Name:mk07d2a285004089dd73e71e881ed70e932c4b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:27:55.759316   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:27:55.759333   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:27:55.759346   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:27:55.759360   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:27:55.759372   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:27:55.759384   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:27:55.759400   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:27:55.759412   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:27:55.759461   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:27:55.759494   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:27:55.759504   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:27:55.759526   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:27:55.759550   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:27:55.759571   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:27:55.759608   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:27:55.759632   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:55.759646   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:27:55.759658   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:27:55.760218   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:27:55.787520   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:27:55.815044   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:27:55.840056   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:27:55.867094   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 18:27:55.897468   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:27:55.954340   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:27:55.987648   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:27:56.013314   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:27:56.039452   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:27:56.065754   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:27:56.091942   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:27:56.111135   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:27:56.117304   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:27:56.128689   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.133805   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.133860   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:27:56.140551   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:27:56.152253   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:27:56.164005   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.169124   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.169182   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:27:56.175462   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:27:56.187518   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:27:56.201710   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.206624   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.206673   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:27:56.212905   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
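The .0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-hashes of the corresponding CA files, which is how OpenSSL-based clients locate them in /etc/ssl/certs. The per-certificate step is roughly:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# h is b5213941 here, matching the b5213941.0 link created above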
	I0807 18:27:56.224505   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:27:56.229013   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:27:56.229063   44266 kubeadm.go:392] StartCluster: {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:27:56.229135   44266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 18:27:56.229186   44266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 18:27:56.270696   44266 cri.go:89] found id: ""
	I0807 18:27:56.270773   44266 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 18:27:56.281401   44266 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 18:27:56.291997   44266 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 18:27:56.302725   44266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 18:27:56.302745   44266 kubeadm.go:157] found existing configuration files:
	
	I0807 18:27:56.302792   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 18:27:56.312979   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 18:27:56.313046   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 18:27:56.323033   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 18:27:56.332468   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 18:27:56.332514   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 18:27:56.342420   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 18:27:56.351963   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 18:27:56.352032   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 18:27:56.361913   44266 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 18:27:56.371591   44266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 18:27:56.371657   44266 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 18:27:56.381281   44266 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 18:27:56.488327   44266 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 18:27:56.488440   44266 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 18:27:56.624126   44266 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 18:27:56.624281   44266 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 18:27:56.624494   44266 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 18:27:56.873034   44266 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 18:27:56.990825   44266 out.go:204]   - Generating certificates and keys ...
	I0807 18:27:56.990959   44266 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 18:27:56.991052   44266 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 18:27:57.066929   44266 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 18:27:57.283110   44266 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 18:27:57.486271   44266 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 18:27:57.678831   44266 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 18:27:57.750579   44266 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 18:27:57.750810   44266 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-198246 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0807 18:27:58.190149   44266 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 18:27:58.190378   44266 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-198246 localhost] and IPs [192.168.39.196 127.0.0.1 ::1]
	I0807 18:27:58.450761   44266 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 18:27:58.618895   44266 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 18:27:58.844633   44266 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 18:27:58.844738   44266 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 18:27:58.940356   44266 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 18:27:59.154431   44266 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 18:27:59.281640   44266 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 18:27:59.360167   44266 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 18:27:59.439806   44266 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 18:27:59.440368   44266 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 18:27:59.443581   44266 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 18:27:59.445418   44266 out.go:204]   - Booting up control plane ...
	I0807 18:27:59.445508   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 18:27:59.446192   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 18:27:59.446957   44266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 18:27:59.461414   44266 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 18:27:59.462292   44266 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 18:27:59.462339   44266 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 18:27:59.600411   44266 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 18:27:59.600545   44266 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 18:28:00.099690   44266 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.352376ms
	I0807 18:28:00.099804   44266 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 18:28:05.959621   44266 kubeadm.go:310] [api-check] The API server is healthy after 5.862299271s
	I0807 18:28:05.975846   44266 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 18:28:06.014624   44266 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 18:28:06.042297   44266 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 18:28:06.042535   44266 kubeadm.go:310] [mark-control-plane] Marking the node ha-198246 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 18:28:06.057030   44266 kubeadm.go:310] [bootstrap-token] Using token: acde14.b8y6evu3gygtakpe
	I0807 18:28:06.058575   44266 out.go:204]   - Configuring RBAC rules ...
	I0807 18:28:06.058714   44266 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 18:28:06.066217   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 18:28:06.080020   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 18:28:06.087791   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 18:28:06.092681   44266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 18:28:06.096020   44266 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 18:28:06.374616   44266 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 18:28:06.820636   44266 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 18:28:07.370115   44266 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 18:28:07.370141   44266 kubeadm.go:310] 
	I0807 18:28:07.370203   44266 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 18:28:07.370211   44266 kubeadm.go:310] 
	I0807 18:28:07.370295   44266 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 18:28:07.370303   44266 kubeadm.go:310] 
	I0807 18:28:07.370345   44266 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 18:28:07.370425   44266 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 18:28:07.370496   44266 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 18:28:07.370506   44266 kubeadm.go:310] 
	I0807 18:28:07.370578   44266 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 18:28:07.370587   44266 kubeadm.go:310] 
	I0807 18:28:07.370652   44266 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 18:28:07.370661   44266 kubeadm.go:310] 
	I0807 18:28:07.370747   44266 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 18:28:07.370856   44266 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 18:28:07.370953   44266 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 18:28:07.370964   44266 kubeadm.go:310] 
	I0807 18:28:07.371074   44266 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 18:28:07.371188   44266 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 18:28:07.371231   44266 kubeadm.go:310] 
	I0807 18:28:07.371348   44266 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token acde14.b8y6evu3gygtakpe \
	I0807 18:28:07.371521   44266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 \
	I0807 18:28:07.371563   44266 kubeadm.go:310] 	--control-plane 
	I0807 18:28:07.371570   44266 kubeadm.go:310] 
	I0807 18:28:07.371677   44266 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 18:28:07.371685   44266 kubeadm.go:310] 
	I0807 18:28:07.371782   44266 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token acde14.b8y6evu3gygtakpe \
	I0807 18:28:07.371920   44266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 
	I0807 18:28:07.372287   44266 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 18:28:07.372313   44266 cni.go:84] Creating CNI manager for ""
	I0807 18:28:07.372321   44266 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0807 18:28:07.374314   44266 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 18:28:07.375747   44266 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 18:28:07.381422   44266 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 18:28:07.381440   44266 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 18:28:07.402202   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 18:28:07.779600   44266 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 18:28:07.779665   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:07.779684   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246 minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=true
	I0807 18:28:07.798602   44266 ops.go:34] apiserver oom_adj: -16
	I0807 18:28:08.016757   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:08.517348   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:09.017382   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:09.516834   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:10.017237   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:10.517025   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:11.017393   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:11.517645   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:12.017006   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:12.517242   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:13.017450   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:13.517301   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:14.017420   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:14.517520   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:15.016948   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:15.517754   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:16.016906   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:16.517576   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:17.017692   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:17.517425   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:18.017663   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:18.517834   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:19.017814   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:19.516956   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.017249   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.516839   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 18:28:20.593055   44266 kubeadm.go:1113] duration metric: took 12.81344917s to wait for elevateKubeSystemPrivileges
	I0807 18:28:20.593093   44266 kubeadm.go:394] duration metric: took 24.364034512s to StartCluster
	I0807 18:28:20.593114   44266 settings.go:142] acquiring lock: {Name:mke44792daf8192c7cb4430e19df00c0686edd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:20.593205   44266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:28:20.593898   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/kubeconfig: {Name:mk9a4ad53bf4447453626a7769211592f39f92fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:20.594131   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 18:28:20.594146   44266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 18:28:20.594202   44266 addons.go:69] Setting storage-provisioner=true in profile "ha-198246"
	I0807 18:28:20.594127   44266 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:28:20.594240   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:28:20.594244   44266 addons.go:234] Setting addon storage-provisioner=true in "ha-198246"
	I0807 18:28:20.594253   44266 addons.go:69] Setting default-storageclass=true in profile "ha-198246"
	I0807 18:28:20.594272   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:20.594281   44266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-198246"
	I0807 18:28:20.594338   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:20.594625   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.594626   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.594650   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.594656   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.609354   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36055
	I0807 18:28:20.609414   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0807 18:28:20.609790   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.609862   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.610266   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.610283   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.610389   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.610410   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.610618   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.610723   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.610793   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.611217   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.611247   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.612810   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:28:20.613017   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0807 18:28:20.613524   44266 addons.go:234] Setting addon default-storageclass=true in "ha-198246"
	I0807 18:28:20.613554   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:20.613774   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.613789   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.613921   44266 cert_rotation.go:137] Starting client certificate rotation controller
	I0807 18:28:20.626271   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I0807 18:28:20.626822   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.627365   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.627390   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.627663   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.627835   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.628446   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0807 18:28:20.628810   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.629360   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.629383   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.629588   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:20.629689   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.630104   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:20.630142   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:20.631750   44266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 18:28:20.633155   44266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:28:20.633177   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 18:28:20.633196   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:20.636123   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.636542   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:20.636568   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.636689   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:20.636880   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:20.637013   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:20.637127   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:20.646444   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0807 18:28:20.646903   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:20.647381   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:20.647407   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:20.647701   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:20.647869   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:20.649389   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:20.649621   44266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 18:28:20.649639   44266 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 18:28:20.649655   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:20.652497   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.652927   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:20.652955   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:20.653107   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:20.653303   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:20.653475   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:20.653622   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:20.708691   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 18:28:20.771288   44266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 18:28:20.803911   44266 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 18:28:21.019837   44266 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0807 18:28:21.230253   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230275   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230346   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230366   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230561   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.230578   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.230586   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230594   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.230677   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.230683   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.230695   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.230703   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.230710   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.232188   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.232193   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.232228   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.232229   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.232250   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.232250   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.232412   44266 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0807 18:28:21.232423   44266 round_trippers.go:469] Request Headers:
	I0807 18:28:21.232433   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:28:21.232442   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:28:21.244939   44266 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0807 18:28:21.245737   44266 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0807 18:28:21.245758   44266 round_trippers.go:469] Request Headers:
	I0807 18:28:21.245768   44266 round_trippers.go:473]     Content-Type: application/json
	I0807 18:28:21.245779   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:28:21.245784   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:28:21.248031   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:28:21.248194   44266 main.go:141] libmachine: Making call to close driver server
	I0807 18:28:21.248228   44266 main.go:141] libmachine: (ha-198246) Calling .Close
	I0807 18:28:21.248571   44266 main.go:141] libmachine: (ha-198246) DBG | Closing plugin on server side
	I0807 18:28:21.248579   44266 main.go:141] libmachine: Successfully made call to close driver server
	I0807 18:28:21.248602   44266 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 18:28:21.251329   44266 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0807 18:28:21.252717   44266 addons.go:510] duration metric: took 658.567856ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0807 18:28:21.252752   44266 start.go:246] waiting for cluster config update ...
	I0807 18:28:21.252766   44266 start.go:255] writing updated cluster config ...
	I0807 18:28:21.254496   44266 out.go:177] 
	I0807 18:28:21.255798   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:21.255869   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:21.257396   44266 out.go:177] * Starting "ha-198246-m02" control-plane node in "ha-198246" cluster
	I0807 18:28:21.258544   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:28:21.258563   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:28:21.258651   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:28:21.258666   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:28:21.258740   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:21.258909   44266 start.go:360] acquireMachinesLock for ha-198246-m02: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:28:21.258952   44266 start.go:364] duration metric: took 24.011µs to acquireMachinesLock for "ha-198246-m02"
	I0807 18:28:21.258975   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:28:21.259059   44266 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0807 18:28:21.260585   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:28:21.260664   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:21.260685   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:21.274940   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0807 18:28:21.275393   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:21.275883   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:21.275912   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:21.276238   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:21.276417   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:21.276559   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:21.276724   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:28:21.276747   44266 client.go:168] LocalClient.Create starting
	I0807 18:28:21.276782   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:28:21.276821   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:28:21.276844   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:28:21.276909   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:28:21.276934   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:28:21.276948   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:28:21.276978   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:28:21.276990   44266 main.go:141] libmachine: (ha-198246-m02) Calling .PreCreateCheck
	I0807 18:28:21.277160   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:21.277503   44266 main.go:141] libmachine: Creating machine...
	I0807 18:28:21.277517   44266 main.go:141] libmachine: (ha-198246-m02) Calling .Create
	I0807 18:28:21.277664   44266 main.go:141] libmachine: (ha-198246-m02) Creating KVM machine...
	I0807 18:28:21.278838   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found existing default KVM network
	I0807 18:28:21.278997   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found existing private KVM network mk-ha-198246
	I0807 18:28:21.279157   44266 main.go:141] libmachine: (ha-198246-m02) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 ...
	I0807 18:28:21.279196   44266 main.go:141] libmachine: (ha-198246-m02) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:28:21.279252   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.279170   44685 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:28:21.279333   44266 main.go:141] libmachine: (ha-198246-m02) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:28:21.511603   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.511453   44685 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa...
	I0807 18:28:21.728136   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.727998   44685 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/ha-198246-m02.rawdisk...
	I0807 18:28:21.728162   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Writing magic tar header
	I0807 18:28:21.728173   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Writing SSH key tar header
	I0807 18:28:21.728181   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:21.728108   44685 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 ...
	I0807 18:28:21.728242   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02
	I0807 18:28:21.728271   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02 (perms=drwx------)
	I0807 18:28:21.728290   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:28:21.728311   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:28:21.728325   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:28:21.728339   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:28:21.728355   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:28:21.728367   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:28:21.728379   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:28:21.728394   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Checking permissions on dir: /home
	I0807 18:28:21.728405   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Skipping /home - not owner
	I0807 18:28:21.728422   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:28:21.728437   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:28:21.728467   44266 main.go:141] libmachine: (ha-198246-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:28:21.728487   44266 main.go:141] libmachine: (ha-198246-m02) Creating domain...
	I0807 18:28:21.729349   44266 main.go:141] libmachine: (ha-198246-m02) define libvirt domain using xml: 
	I0807 18:28:21.729400   44266 main.go:141] libmachine: (ha-198246-m02) <domain type='kvm'>
	I0807 18:28:21.729415   44266 main.go:141] libmachine: (ha-198246-m02)   <name>ha-198246-m02</name>
	I0807 18:28:21.729428   44266 main.go:141] libmachine: (ha-198246-m02)   <memory unit='MiB'>2200</memory>
	I0807 18:28:21.729439   44266 main.go:141] libmachine: (ha-198246-m02)   <vcpu>2</vcpu>
	I0807 18:28:21.729455   44266 main.go:141] libmachine: (ha-198246-m02)   <features>
	I0807 18:28:21.729466   44266 main.go:141] libmachine: (ha-198246-m02)     <acpi/>
	I0807 18:28:21.729474   44266 main.go:141] libmachine: (ha-198246-m02)     <apic/>
	I0807 18:28:21.729486   44266 main.go:141] libmachine: (ha-198246-m02)     <pae/>
	I0807 18:28:21.729493   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.729502   44266 main.go:141] libmachine: (ha-198246-m02)   </features>
	I0807 18:28:21.729510   44266 main.go:141] libmachine: (ha-198246-m02)   <cpu mode='host-passthrough'>
	I0807 18:28:21.729518   44266 main.go:141] libmachine: (ha-198246-m02)   
	I0807 18:28:21.729530   44266 main.go:141] libmachine: (ha-198246-m02)   </cpu>
	I0807 18:28:21.729541   44266 main.go:141] libmachine: (ha-198246-m02)   <os>
	I0807 18:28:21.729550   44266 main.go:141] libmachine: (ha-198246-m02)     <type>hvm</type>
	I0807 18:28:21.729563   44266 main.go:141] libmachine: (ha-198246-m02)     <boot dev='cdrom'/>
	I0807 18:28:21.729574   44266 main.go:141] libmachine: (ha-198246-m02)     <boot dev='hd'/>
	I0807 18:28:21.729587   44266 main.go:141] libmachine: (ha-198246-m02)     <bootmenu enable='no'/>
	I0807 18:28:21.729597   44266 main.go:141] libmachine: (ha-198246-m02)   </os>
	I0807 18:28:21.729627   44266 main.go:141] libmachine: (ha-198246-m02)   <devices>
	I0807 18:28:21.729665   44266 main.go:141] libmachine: (ha-198246-m02)     <disk type='file' device='cdrom'>
	I0807 18:28:21.729686   44266 main.go:141] libmachine: (ha-198246-m02)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/boot2docker.iso'/>
	I0807 18:28:21.729699   44266 main.go:141] libmachine: (ha-198246-m02)       <target dev='hdc' bus='scsi'/>
	I0807 18:28:21.729711   44266 main.go:141] libmachine: (ha-198246-m02)       <readonly/>
	I0807 18:28:21.729721   44266 main.go:141] libmachine: (ha-198246-m02)     </disk>
	I0807 18:28:21.729734   44266 main.go:141] libmachine: (ha-198246-m02)     <disk type='file' device='disk'>
	I0807 18:28:21.729748   44266 main.go:141] libmachine: (ha-198246-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:28:21.729765   44266 main.go:141] libmachine: (ha-198246-m02)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/ha-198246-m02.rawdisk'/>
	I0807 18:28:21.729773   44266 main.go:141] libmachine: (ha-198246-m02)       <target dev='hda' bus='virtio'/>
	I0807 18:28:21.729781   44266 main.go:141] libmachine: (ha-198246-m02)     </disk>
	I0807 18:28:21.729789   44266 main.go:141] libmachine: (ha-198246-m02)     <interface type='network'>
	I0807 18:28:21.729799   44266 main.go:141] libmachine: (ha-198246-m02)       <source network='mk-ha-198246'/>
	I0807 18:28:21.729807   44266 main.go:141] libmachine: (ha-198246-m02)       <model type='virtio'/>
	I0807 18:28:21.729816   44266 main.go:141] libmachine: (ha-198246-m02)     </interface>
	I0807 18:28:21.729832   44266 main.go:141] libmachine: (ha-198246-m02)     <interface type='network'>
	I0807 18:28:21.729845   44266 main.go:141] libmachine: (ha-198246-m02)       <source network='default'/>
	I0807 18:28:21.729856   44266 main.go:141] libmachine: (ha-198246-m02)       <model type='virtio'/>
	I0807 18:28:21.729868   44266 main.go:141] libmachine: (ha-198246-m02)     </interface>
	I0807 18:28:21.729875   44266 main.go:141] libmachine: (ha-198246-m02)     <serial type='pty'>
	I0807 18:28:21.729887   44266 main.go:141] libmachine: (ha-198246-m02)       <target port='0'/>
	I0807 18:28:21.729895   44266 main.go:141] libmachine: (ha-198246-m02)     </serial>
	I0807 18:28:21.729923   44266 main.go:141] libmachine: (ha-198246-m02)     <console type='pty'>
	I0807 18:28:21.729945   44266 main.go:141] libmachine: (ha-198246-m02)       <target type='serial' port='0'/>
	I0807 18:28:21.729956   44266 main.go:141] libmachine: (ha-198246-m02)     </console>
	I0807 18:28:21.729961   44266 main.go:141] libmachine: (ha-198246-m02)     <rng model='virtio'>
	I0807 18:28:21.729976   44266 main.go:141] libmachine: (ha-198246-m02)       <backend model='random'>/dev/random</backend>
	I0807 18:28:21.729987   44266 main.go:141] libmachine: (ha-198246-m02)     </rng>
	I0807 18:28:21.729995   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.730002   44266 main.go:141] libmachine: (ha-198246-m02)     
	I0807 18:28:21.730010   44266 main.go:141] libmachine: (ha-198246-m02)   </devices>
	I0807 18:28:21.730016   44266 main.go:141] libmachine: (ha-198246-m02) </domain>
	I0807 18:28:21.730025   44266 main.go:141] libmachine: (ha-198246-m02) 
	I0807 18:28:21.736803   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:b2:15:7e in network default
	I0807 18:28:21.737390   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring networks are active...
	I0807 18:28:21.737416   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:21.738108   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring network default is active
	I0807 18:28:21.738450   44266 main.go:141] libmachine: (ha-198246-m02) Ensuring network mk-ha-198246 is active
	I0807 18:28:21.738836   44266 main.go:141] libmachine: (ha-198246-m02) Getting domain xml...
	I0807 18:28:21.739511   44266 main.go:141] libmachine: (ha-198246-m02) Creating domain...
	I0807 18:28:22.980593   44266 main.go:141] libmachine: (ha-198246-m02) Waiting to get IP...
	I0807 18:28:22.981319   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:22.981653   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:22.981678   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:22.981626   44685 retry.go:31] will retry after 277.857687ms: waiting for machine to come up
	I0807 18:28:23.261356   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.261928   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.261955   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.261836   44685 retry.go:31] will retry after 296.896309ms: waiting for machine to come up
	I0807 18:28:23.560474   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.560953   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.560974   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.560905   44685 retry.go:31] will retry after 431.200025ms: waiting for machine to come up
	I0807 18:28:23.993408   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:23.993831   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:23.993860   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:23.993783   44685 retry.go:31] will retry after 489.747622ms: waiting for machine to come up
	I0807 18:28:24.485553   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:24.486096   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:24.486118   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:24.486038   44685 retry.go:31] will retry after 595.37365ms: waiting for machine to come up
	I0807 18:28:25.082858   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:25.083273   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:25.083297   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:25.083229   44685 retry.go:31] will retry after 864.817898ms: waiting for machine to come up
	I0807 18:28:25.949301   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:25.949755   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:25.949787   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:25.949705   44685 retry.go:31] will retry after 980.056682ms: waiting for machine to come up
	I0807 18:28:26.931211   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:26.931633   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:26.931667   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:26.931574   44685 retry.go:31] will retry after 1.374312311s: waiting for machine to come up
	I0807 18:28:28.308159   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:28.308539   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:28.308588   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:28.308503   44685 retry.go:31] will retry after 1.32565444s: waiting for machine to come up
	I0807 18:28:29.635739   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:29.636210   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:29.636236   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:29.636128   44685 retry.go:31] will retry after 2.094612533s: waiting for machine to come up
	I0807 18:28:31.731860   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:31.732338   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:31.732366   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:31.732297   44685 retry.go:31] will retry after 2.384083205s: waiting for machine to come up
	I0807 18:28:34.117344   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:34.117765   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:34.117789   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:34.117726   44685 retry.go:31] will retry after 3.244651745s: waiting for machine to come up
	I0807 18:28:37.364060   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:37.364496   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find current IP address of domain ha-198246-m02 in network mk-ha-198246
	I0807 18:28:37.364524   44266 main.go:141] libmachine: (ha-198246-m02) DBG | I0807 18:28:37.364456   44685 retry.go:31] will retry after 3.883256435s: waiting for machine to come up
	I0807 18:28:41.249166   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.249651   44266 main.go:141] libmachine: (ha-198246-m02) Found IP for machine: 192.168.39.251
	I0807 18:28:41.249682   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has current primary IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.249838   44266 main.go:141] libmachine: (ha-198246-m02) Reserving static IP address...
	I0807 18:28:41.250128   44266 main.go:141] libmachine: (ha-198246-m02) DBG | unable to find host DHCP lease matching {name: "ha-198246-m02", mac: "52:54:00:c8:91:fc", ip: "192.168.39.251"} in network mk-ha-198246
	I0807 18:28:41.326471   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Getting to WaitForSSH function...
	I0807 18:28:41.326495   44266 main.go:141] libmachine: (ha-198246-m02) Reserved static IP address: 192.168.39.251
	I0807 18:28:41.326537   44266 main.go:141] libmachine: (ha-198246-m02) Waiting for SSH to be available...
	I0807 18:28:41.329224   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.329503   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.329528   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.329675   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using SSH client type: external
	I0807 18:28:41.329699   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa (-rw-------)
	I0807 18:28:41.329726   44266 main.go:141] libmachine: (ha-198246-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.251 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:28:41.329738   44266 main.go:141] libmachine: (ha-198246-m02) DBG | About to run SSH command:
	I0807 18:28:41.329752   44266 main.go:141] libmachine: (ha-198246-m02) DBG | exit 0
	I0807 18:28:41.456688   44266 main.go:141] libmachine: (ha-198246-m02) DBG | SSH cmd err, output: <nil>: 
	I0807 18:28:41.457054   44266 main.go:141] libmachine: (ha-198246-m02) KVM machine creation complete!
	I0807 18:28:41.457342   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:41.457876   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:41.458082   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:41.458245   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:28:41.458260   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:28:41.459550   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:28:41.459565   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:28:41.459572   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:28:41.459578   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.461855   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.462198   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.462225   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.462361   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.462552   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.462697   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.462811   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.463068   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.463266   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.463277   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:28:41.563789   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:28:41.563815   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:28:41.563825   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.566883   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.567241   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.567263   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.567445   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.567660   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.567837   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.567975   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.568253   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.568452   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.568470   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:28:41.669364   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:28:41.669425   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:28:41.669432   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:28:41.669440   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.669653   44266 buildroot.go:166] provisioning hostname "ha-198246-m02"
	I0807 18:28:41.669679   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.669860   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.672464   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.672770   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.672793   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.672942   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.673104   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.673265   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.673412   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.673627   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.673943   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.673966   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246-m02 && echo "ha-198246-m02" | sudo tee /etc/hostname
	I0807 18:28:41.792440   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246-m02
	
	I0807 18:28:41.792466   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.795604   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.795966   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.795984   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.796230   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:41.796424   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.796595   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:41.796740   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:41.796885   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:41.797037   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:41.797053   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:28:41.906596   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:28:41.906633   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:28:41.906652   44266 buildroot.go:174] setting up certificates
	I0807 18:28:41.906662   44266 provision.go:84] configureAuth start
	I0807 18:28:41.906670   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetMachineName
	I0807 18:28:41.906995   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:41.909871   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.910201   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.910258   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.910405   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:41.912630   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.912923   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:41.912952   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:41.913098   44266 provision.go:143] copyHostCerts
	I0807 18:28:41.913133   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:28:41.913171   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:28:41.913181   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:28:41.913262   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:28:41.913348   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:28:41.913371   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:28:41.913380   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:28:41.913419   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:28:41.913479   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:28:41.913502   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:28:41.913510   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:28:41.913543   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:28:41.913607   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246-m02 san=[127.0.0.1 192.168.39.251 ha-198246-m02 localhost minikube]
	I0807 18:28:42.029415   44266 provision.go:177] copyRemoteCerts
	I0807 18:28:42.029466   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:28:42.029488   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.031816   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.032108   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.032134   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.032373   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.032590   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.032761   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.032906   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.115186   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:28:42.115248   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:28:42.139771   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:28:42.139888   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:28:42.166463   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:28:42.166547   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:28:42.191360   44266 provision.go:87] duration metric: took 284.686105ms to configureAuth
	I0807 18:28:42.191394   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:28:42.191575   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:42.191639   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.194385   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.194831   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.194853   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.195191   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.195376   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.195544   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.195680   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.195895   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:42.196044   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:42.196058   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:28:42.467289   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:28:42.467319   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:28:42.467328   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetURL
	I0807 18:28:42.468563   44266 main.go:141] libmachine: (ha-198246-m02) DBG | Using libvirt version 6000000
	I0807 18:28:42.470865   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.471205   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.471243   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.471411   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:28:42.471434   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:28:42.471451   44266 client.go:171] duration metric: took 21.19468682s to LocalClient.Create
	I0807 18:28:42.471481   44266 start.go:167] duration metric: took 21.194756451s to libmachine.API.Create "ha-198246"
	I0807 18:28:42.471493   44266 start.go:293] postStartSetup for "ha-198246-m02" (driver="kvm2")
	I0807 18:28:42.471507   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:28:42.471534   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.471773   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:28:42.471806   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.474080   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.474413   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.474433   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.474545   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.474739   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.474895   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.475097   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.560490   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:28:42.565161   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:28:42.565195   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:28:42.565275   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:28:42.565387   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:28:42.565402   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:28:42.565531   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:28:42.576441   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:28:42.601887   44266 start.go:296] duration metric: took 130.379831ms for postStartSetup
	I0807 18:28:42.601945   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetConfigRaw
	I0807 18:28:42.602524   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:42.605525   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.605930   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.605957   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.606232   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:28:42.606422   44266 start.go:128] duration metric: took 21.347355066s to createHost
	I0807 18:28:42.606445   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.608659   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.609011   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.609037   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.609154   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.609339   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.609509   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.609670   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.609881   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:28:42.610037   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.251 22 <nil> <nil>}
	I0807 18:28:42.610048   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:28:42.712453   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055322.681885034
	
	I0807 18:28:42.712476   44266 fix.go:216] guest clock: 1723055322.681885034
	I0807 18:28:42.712486   44266 fix.go:229] Guest: 2024-08-07 18:28:42.681885034 +0000 UTC Remote: 2024-08-07 18:28:42.606435256 +0000 UTC m=+80.917455918 (delta=75.449778ms)
	I0807 18:28:42.712505   44266 fix.go:200] guest clock delta is within tolerance: 75.449778ms
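The three fix.go lines above are the guest-clock check: minikube reads `date +%s.%N` over SSH, compares it against the host wall clock, and accepts the ~75ms skew as within tolerance. A minimal way to reproduce the same comparison by hand, assuming the node key path from this run (the check itself is illustrative, not minikube's exact implementation):

    KEY=/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa
    guest=$(ssh -i "$KEY" -o StrictHostKeyChecking=no docker@192.168.39.251 'date +%s.%N')
    host=$(date +%s.%N)
    # delta may be negative if the guest clock is ahead of the host
    echo "guest/host clock delta: $(echo "$host - $guest" | bc -l)s"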
	I0807 18:28:42.712511   44266 start.go:83] releasing machines lock for "ha-198246-m02", held for 21.453548489s
	I0807 18:28:42.712528   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.712799   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:42.715436   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.715971   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.716003   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.718499   44266 out.go:177] * Found network options:
	I0807 18:28:42.719912   44266 out.go:177]   - NO_PROXY=192.168.39.196
	W0807 18:28:42.721156   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:28:42.721186   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.721776   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.721994   44266 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:28:42.722091   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:28:42.722129   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	W0807 18:28:42.722390   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:28:42.722461   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:28:42.722484   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:28:42.724944   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725052   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725311   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.725352   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725460   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.725478   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:42.725500   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:42.725609   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.725652   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:28:42.725814   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.725827   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:28:42.725956   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:28:42.725974   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.726095   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:28:42.958805   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:28:42.964806   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:28:42.964894   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:28:42.981388   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:28:42.981416   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:28:42.981488   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:28:42.997458   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:28:43.012016   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:28:43.012089   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:28:43.025912   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:28:43.039739   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:28:43.155400   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:28:43.303225   44266 docker.go:233] disabling docker service ...
	I0807 18:28:43.303286   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:28:43.318739   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:28:43.332532   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:28:43.472596   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:28:43.605966   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:28:43.619925   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:28:43.638588   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:28:43.638650   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.649283   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:28:43.649357   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.659951   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.670486   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.680962   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:28:43.691796   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.702576   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:28:43.720080   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
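The sed/grep commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they point CRI-O at the registry.k8s.io/pause:3.9 pause image, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The drop-in they leave behind looks roughly like the sketch below; the exact table placement depends on the base image's original file, so treat this as an approximation rather than the literal result:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]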
	I0807 18:28:43.730366   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:28:43.739403   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:28:43.739465   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:28:43.752984   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:28:43.764481   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:28:43.897332   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:28:44.051283   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:28:44.051350   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:28:44.056138   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:28:44.056186   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:28:44.060100   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:28:44.107041   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:28:44.107160   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:28:44.136233   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:28:44.172438   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:28:44.174040   44266 out.go:177]   - env NO_PROXY=192.168.39.196
	I0807 18:28:44.175421   44266 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:28:44.178934   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:44.179638   44266 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:28:36 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:28:44.179664   44266 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:28:44.179936   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:28:44.184425   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:28:44.197404   44266 mustload.go:65] Loading cluster: ha-198246
	I0807 18:28:44.197592   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:28:44.197871   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:44.197898   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:44.212129   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0807 18:28:44.212590   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:44.213046   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:44.213066   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:44.213444   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:44.213618   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:28:44.215209   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:28:44.215490   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:28:44.215512   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:28:44.229524   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34027
	I0807 18:28:44.229880   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:28:44.230343   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:28:44.230365   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:28:44.230754   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:28:44.230920   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:28:44.231062   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.251
	I0807 18:28:44.231075   44266 certs.go:194] generating shared ca certs ...
	I0807 18:28:44.231089   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.231203   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:28:44.231239   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:28:44.231248   44266 certs.go:256] generating profile certs ...
	I0807 18:28:44.231307   44266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:28:44.231330   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680
	I0807 18:28:44.231342   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.254]
	I0807 18:28:44.559979   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 ...
	I0807 18:28:44.560015   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680: {Name:mk532d2b707d0b4ff2030a049398865e8e454aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.560219   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680 ...
	I0807 18:28:44.560234   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680: {Name:mkd4bd0dec009d42e6ef356f3ddf31b6cb75091b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:28:44.560311   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.f3bca680 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:28:44.560448   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.f3bca680 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
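The signed apiserver cert generated above carries SANs for the in-cluster service IPs, localhost, both control-plane node IPs and the kube-vip VIP 192.168.39.254, so clients reaching the API through any of those addresses can validate the certificate. One way to double-check the SAN list on the host, using the profile path from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt \
      | grep -A1 'Subject Alternative Name'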
	I0807 18:28:44.560582   44266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:28:44.560598   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:28:44.560612   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:28:44.560628   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:28:44.560643   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:28:44.560658   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:28:44.560672   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:28:44.560687   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:28:44.560701   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:28:44.560749   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:28:44.560780   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:28:44.560791   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:28:44.560824   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:28:44.560849   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:28:44.560873   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:28:44.560916   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:28:44.560944   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:28:44.560960   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:28:44.560975   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:44.561016   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:28:44.564346   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:44.564706   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:28:44.564750   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:28:44.564946   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:28:44.565119   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:28:44.565260   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:28:44.565409   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:28:44.636542   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0807 18:28:44.641908   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:28:44.653641   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0807 18:28:44.658316   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:28:44.670858   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:28:44.676928   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:28:44.688768   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:28:44.693467   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:28:44.706551   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:28:44.711030   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:28:44.721174   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0807 18:28:44.725233   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0807 18:28:44.735693   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:28:44.760511   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:28:44.784292   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:28:44.808965   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:28:44.832316   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 18:28:44.855893   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:28:44.879876   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:28:44.903976   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:28:44.927934   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:28:44.951549   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:28:44.976102   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:28:45.000438   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:28:45.017427   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:28:45.034095   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:28:45.051357   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:28:45.068164   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:28:45.084903   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0807 18:28:45.102266   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:28:45.119661   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:28:45.125624   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:28:45.137494   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.142382   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.142458   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:28:45.148370   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:28:45.159984   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:28:45.171372   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.176093   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.176163   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:28:45.182048   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:28:45.193770   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:28:45.205285   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.209824   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.209886   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:28:45.215494   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
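The pattern in this block is the standard OpenSSL hashed-symlink layout: each CA is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 above) so TLS libraries can locate it by hash. A minimal sketch of doing the same for an extra CA by hand (my-ca.pem is a hypothetical file, not part of this run):

    sudo cp my-ca.pem /usr/share/ca-certificates/my-ca.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${hash}.0"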
	I0807 18:28:45.226843   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:28:45.231043   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:28:45.231100   44266 kubeadm.go:934] updating node {m02 192.168.39.251 8443 v1.30.3 crio true true} ...
	I0807 18:28:45.231200   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.251
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
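The kubelet configuration above is written to the node as a systemd drop-in (the 10-kubeadm.conf scp'd further down), overriding ExecStart with the node-specific --hostname-override and --node-ip flags. Once the unit is reloaded, the merged result can be inspected on the node with standard systemd tooling, for example:

    # Show kubelet.service together with its drop-ins, including 10-kubeadm.conf
    sudo systemctl cat kubelet
    # Confirm the effective ExecStart picked up the node IP
    systemctl show kubelet -p ExecStart | tr ' ' '\n' | grep -- --node-ip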
	I0807 18:28:45.231226   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:28:45.231271   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:28:45.250153   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:28:45.250214   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
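The generated kube-vip manifest above runs as a static pod on each control-plane node, uses leader election (the plndr-cp-lock lease) to decide which node announces the VIP, and advertises 192.168.39.254 on eth0 with load-balancing onto port 8443. Once the pod is up, a quick way to confirm the VIP landed on the current leader (generic commands, not part of this log; run them on the node holding the lease):

    # The elected leader should carry the VIP on eth0
    ip addr show eth0 | grep 192.168.39.254
    # The static pod itself, as seen by CRI-O
    sudo crictl ps --name kube-vip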
	I0807 18:28:45.250259   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:28:45.260907   44266 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:28:45.260967   44266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:28:45.270880   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:28:45.270914   44266 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0807 18:28:45.270924   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:28:45.270930   44266 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0807 18:28:45.270992   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:28:45.275789   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:28:45.275817   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:29:16.535217   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:29:16.535295   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:29:16.541424   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:29:16.541476   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:29:46.649609   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:29:46.666629   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:29:46.666743   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:29:46.671680   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:29:46.671715   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
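
The kubectl, kubeadm and kubelet binaries above are fetched through URLs of the form "...?checksum=file:<url>.sha256", meaning each download is verified against the published SHA-256 digest before it is cached and copied onto the VM. Below is a simplified Go sketch of that verification, assuming the binary and its .sha256 sidecar are already on disk; minikube's real logic lives in download.go and also handles caching and retries.

// verify_sha256.go - a simplified sketch of the checksum verification implied
// by the "?checksum=file:...sha256" download URLs above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

func verify(binPath, sumPath string) error {
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))

	raw, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	// .sha256 files may contain "<digest>" or "<digest>  <filename>".
	fields := strings.Fields(string(raw))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	want := fields[0]

	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := verify("kubectl", "kubectl.sha256"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("checksum OK")
}
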
	I0807 18:29:47.065172   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:29:47.074897   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0807 18:29:47.091782   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:29:47.108598   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:29:47.125245   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:29:47.129682   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
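
The bash one-liner above makes the hosts entry idempotent: it strips any existing line ending in a tab plus "control-plane.minikube.internal" and re-appends the VIP mapping, so repeated starts do not accumulate duplicate entries. A rough Go equivalent of that rewrite is sketched below (not minikube's implementation; the path and VIP are taken from the log line above).

// pin_hosts.go - a sketch of the idempotent /etc/hosts rewrite performed by
// the bash one-liner above: drop any stale control-plane.minikube.internal
// line, then append the VIP mapping.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
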
	I0807 18:29:47.142072   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:29:47.274574   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:29:47.291877   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:29:47.292235   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:29:47.292273   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:29:47.307242   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39989
	I0807 18:29:47.307720   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:29:47.308297   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:29:47.308318   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:29:47.308692   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:29:47.308878   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:29:47.309043   44266 start.go:317] joinCluster: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:29:47.309174   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:29:47.309196   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:29:47.312164   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:29:47.312576   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:29:47.312598   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:29:47.312741   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:29:47.312882   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:29:47.313026   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:29:47.313145   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:29:47.474155   44266 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:29:47.474212   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78qnj6.khw60v382x7suzf2 --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443"
	I0807 18:30:09.531673   44266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 78qnj6.khw60v382x7suzf2 --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m02 --control-plane --apiserver-advertise-address=192.168.39.251 --apiserver-bind-port=8443": (22.057432801s)
	I0807 18:30:09.531712   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:30:10.063192   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246-m02 minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=false
	I0807 18:30:10.185705   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198246-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:30:10.289663   44266 start.go:319] duration metric: took 22.980616289s to joinCluster
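
Once `kubeadm join` succeeds, the new control-plane node is labeled with minikube metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed so workloads can schedule there, as the two kubectl invocations above show. The Go sketch below applies the same labels with a strategic merge patch instead of kubectl; the label keys and values are copied from the logged command, the kubeconfig path is the on-VM one used by that command, and the separate taint-removal step is not covered.

// label_node.go - a sketch of the node-labeling step above via client-go.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Labels taken verbatim from the `kubectl label --overwrite nodes` call above.
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/updated_at":"2024_08_07T18_30_10_0700",` +
		`"minikube.k8s.io/version":"v1.33.1",` +
		`"minikube.k8s.io/commit":"0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e",` +
		`"minikube.k8s.io/name":"ha-198246",` +
		`"minikube.k8s.io/primary":"false"}}}`)

	_, err = cs.CoreV1().Nodes().Patch(context.Background(), "ha-198246-m02",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
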
	I0807 18:30:10.289758   44266 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:30:10.290021   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:30:10.291609   44266 out.go:177] * Verifying Kubernetes components...
	I0807 18:30:10.293124   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:30:10.576909   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:30:10.661898   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:30:10.662179   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:30:10.662254   44266 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0807 18:30:10.662605   44266 node_ready.go:35] waiting up to 6m0s for node "ha-198246-m02" to be "Ready" ...
	I0807 18:30:10.662743   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:10.662754   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:10.662761   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:10.662767   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:10.674699   44266 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0807 18:30:11.163679   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:11.163703   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:11.163714   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:11.163720   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:11.167619   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:11.662940   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:11.662963   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:11.662969   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:11.662974   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:11.667816   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:12.163352   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:12.163385   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:12.163396   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:12.163402   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:12.172406   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:30:12.662926   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:12.662951   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:12.662956   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:12.662961   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:12.667516   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:12.668218   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:13.163680   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:13.163706   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:13.163714   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:13.163719   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:13.168225   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:13.663108   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:13.663128   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:13.663136   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:13.663140   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:13.667026   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:14.163223   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:14.163246   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:14.163255   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:14.163263   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:14.167544   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:14.663519   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:14.663545   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:14.663556   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:14.663562   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:14.667595   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:15.163698   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:15.163725   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:15.163738   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:15.163746   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:15.167266   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:15.167770   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:15.662914   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:15.662941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:15.662951   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:15.662957   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:15.666490   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:16.163391   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:16.163419   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:16.163429   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:16.163435   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:16.167182   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:16.663906   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:16.663932   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:16.663943   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:16.663948   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:16.668176   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:17.162965   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:17.163049   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:17.163068   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:17.163080   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:17.167431   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:17.168285   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:17.663405   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:17.663429   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:17.663440   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:17.663447   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:17.667918   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:18.162918   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:18.162941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:18.162949   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:18.162953   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:18.166467   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:18.663722   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:18.663747   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:18.663757   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:18.663762   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:18.668276   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:19.162945   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:19.162966   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:19.162973   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:19.162978   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:19.166353   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:19.663155   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:19.663178   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:19.663187   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:19.663192   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:19.667298   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:19.668045   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:20.163460   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:20.163483   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:20.163490   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:20.163493   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:20.166765   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:20.662987   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:20.663010   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:20.663021   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:20.663027   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:20.666411   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:21.163219   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:21.163241   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:21.163249   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:21.163252   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:21.167248   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:21.663138   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:21.663163   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:21.663171   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:21.663177   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:21.666630   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:22.163469   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:22.163496   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:22.163507   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:22.163513   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:22.166849   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:22.167436   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:22.663333   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:22.663354   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:22.663364   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:22.663369   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:22.667068   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:23.163229   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:23.163251   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:23.163259   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:23.163263   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:23.167845   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:23.663204   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:23.663224   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:23.663232   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:23.663236   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:23.667349   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:24.163700   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:24.163721   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:24.163727   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:24.163730   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:24.166893   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:24.167762   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:24.663135   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:24.663185   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:24.663196   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:24.663200   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:24.667220   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:25.163134   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:25.163158   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:25.163167   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:25.163171   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:25.167004   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:25.663113   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:25.663133   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:25.663141   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:25.663144   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:25.666348   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:26.162882   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:26.162907   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:26.162918   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:26.162923   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:26.166232   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:26.663636   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:26.663655   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:26.663663   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:26.663668   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:26.668096   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:26.668860   44266 node_ready.go:53] node "ha-198246-m02" has status "Ready":"False"
	I0807 18:30:27.162906   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:27.162932   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:27.162956   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:27.162961   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:27.166246   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:27.662814   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:27.662838   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:27.662849   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:27.662855   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:27.666937   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:28.163385   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:28.163408   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:28.163419   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:28.163425   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:28.166996   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:28.663197   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:28.663220   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:28.663227   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:28.663231   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:28.666970   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.163041   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.163064   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.163072   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.163077   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.166751   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.167186   44266 node_ready.go:49] node "ha-198246-m02" has status "Ready":"True"
	I0807 18:30:29.167202   44266 node_ready.go:38] duration metric: took 18.504556301s for node "ha-198246-m02" to be "Ready" ...
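
The long run of GET requests above is a roughly 500 ms poll against the first control plane's API endpoint, waiting for the freshly joined node to report the Ready condition; the wait budget is 6m0s and here it completed in about 18.5 s. Below is a minimal Go sketch of the same wait loop using client-go; it is not minikube's node_ready.go, and the kubeconfig path is the on-VM one seen earlier in the log.

// wait_ready.go - a minimal sketch of the ~500ms Ready poll seen above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-198246-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
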
	I0807 18:30:29.167209   44266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:30:29.167270   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:29.167282   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.167291   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.167298   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.172316   44266 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:30:29.179632   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.179698   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rbnrx
	I0807 18:30:29.179705   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.179713   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.179716   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.182900   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.183633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.183648   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.183658   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.183664   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.186729   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.187287   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.187309   44266 pod_ready.go:81] duration metric: took 7.655346ms for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.187321   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.187382   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6w6g
	I0807 18:30:29.187393   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.187403   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.187407   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.190210   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.190826   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.190843   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.190852   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.190860   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.193702   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.194277   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.194298   44266 pod_ready.go:81] duration metric: took 6.969332ms for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.194310   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.194367   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246
	I0807 18:30:29.194377   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.194385   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.194388   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.197299   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.197836   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.197850   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.197857   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.197862   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.200079   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.200638   44266 pod_ready.go:92] pod "etcd-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.200658   44266 pod_ready.go:81] duration metric: took 6.339465ms for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.200671   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.200727   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m02
	I0807 18:30:29.200736   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.200746   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.200754   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.202945   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.203877   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.203893   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.203901   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.203907   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.205928   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:29.206548   44266 pod_ready.go:92] pod "etcd-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.206567   44266 pod_ready.go:81] duration metric: took 5.88553ms for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.206585   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.363426   44266 request.go:629] Waited for 156.781521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:30:29.363516   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:30:29.363524   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.363536   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.363544   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.367464   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.563699   44266 request.go:629] Waited for 195.39754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.563755   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:29.563761   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.563771   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.563776   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.568245   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:29.568903   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.568920   44266 pod_ready.go:81] duration metric: took 362.325252ms for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.568929   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.764067   44266 request.go:629] Waited for 195.080715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:30:29.764156   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:30:29.764163   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.764175   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.764183   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.767929   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.964048   44266 request.go:629] Waited for 195.354316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.964123   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:29.964133   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:29.964143   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:29.964148   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:29.967577   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:29.968090   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:29.968117   44266 pod_ready.go:81] duration metric: took 399.182286ms for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:29.968126   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.163142   44266 request.go:629] Waited for 194.953767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:30:30.163221   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:30:30.163231   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.163244   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.163253   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.166706   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.363808   44266 request.go:629] Waited for 196.398052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:30.363874   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:30.363885   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.363895   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.363904   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.367698   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.368173   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:30.368190   44266 pod_ready.go:81] duration metric: took 400.057957ms for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.368215   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.563270   44266 request.go:629] Waited for 194.991431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:30:30.563343   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:30:30.563350   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.563360   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.563365   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.566556   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.763133   44266 request.go:629] Waited for 196.018941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:30.763191   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:30.763198   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.763206   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.763217   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.766348   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:30.767082   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:30.767100   44266 pod_ready.go:81] duration metric: took 398.876067ms for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.767118   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:30.963064   44266 request.go:629] Waited for 195.878143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:30:30.963131   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:30:30.963137   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:30.963144   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:30.963151   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:30.966736   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.163920   44266 request.go:629] Waited for 196.37962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.164005   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.164017   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.164028   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.164037   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.168411   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:31.168975   44266 pod_ready.go:92] pod "kube-proxy-4l79v" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.168994   44266 pod_ready.go:81] duration metric: took 401.867348ms for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.169006   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.363592   44266 request.go:629] Waited for 194.511545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:30:31.363668   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:30:31.363675   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.363685   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.363691   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.368028   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:31.563121   44266 request.go:629] Waited for 194.293236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:31.563213   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:31.563223   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.563234   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.563244   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.566830   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.567571   44266 pod_ready.go:92] pod "kube-proxy-m5ng2" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.567600   44266 pod_ready.go:81] duration metric: took 398.576464ms for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.567631   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.764083   44266 request.go:629] Waited for 196.35828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:30:31.764163   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:30:31.764177   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.764191   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.764199   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.767212   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:30:31.963277   44266 request.go:629] Waited for 195.395503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.963339   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:30:31.963343   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:31.963350   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:31.963354   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:31.966678   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:31.967366   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:31.967384   44266 pod_ready.go:81] duration metric: took 399.739353ms for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:31.967393   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:32.163518   44266 request.go:629] Waited for 196.071536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:30:32.163576   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:30:32.163581   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.163589   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.163593   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.167125   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.363259   44266 request.go:629] Waited for 195.352702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:32.363309   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:30:32.363314   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.363325   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.363330   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.366413   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.367013   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:30:32.367033   44266 pod_ready.go:81] duration metric: took 399.634584ms for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:30:32.367043   44266 pod_ready.go:38] duration metric: took 3.199823963s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
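
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in the pod-readiness checks above come from client-go's default client-side rate limiter (roughly 5 requests per second with a burst of 10 when QPS and Burst are left at zero, as they are in the rest.Config dumped earlier); the back-to-back pod Get + node Get pairs exceed it, so requests are delayed on the client before reaching the API server. The sketch below shows how a client could raise those limits on its rest.Config; this is an option for tooling in general, not something the test does here, and the kubeconfig path is assumed.

// qps_burst.go - a sketch of raising client-go's client-side rate limits,
// the limiter behind the throttling messages above.
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}

	// Zero values fall back to client-go's defaults (~5 QPS, burst 10),
	// which is what produces the client-side throttling waits in the log.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = cs // all requests made through cs now share the raised limiter
}
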
	I0807 18:30:32.367059   44266 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:30:32.367111   44266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:30:32.385352   44266 api_server.go:72] duration metric: took 22.095548352s to wait for apiserver process to appear ...
	I0807 18:30:32.385377   44266 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:30:32.385393   44266 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0807 18:30:32.391376   44266 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0807 18:30:32.391449   44266 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0807 18:30:32.391462   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.391472   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.391483   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.392358   44266 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0807 18:30:32.392431   44266 api_server.go:141] control plane version: v1.30.3
	I0807 18:30:32.392445   44266 api_server.go:131] duration metric: took 7.062347ms to wait for apiserver health ...
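
The health check above is a plain GET against /healthz on the first control-plane endpoint, authenticated with the profile's client certificate and the cluster CA (the same files listed in the client config earlier in this log); the apiserver answers "ok" with HTTP 200, and the follow-up GET /version confirms v1.30.3. A simplified Go sketch of the same probe is below; it is a stand-in for minikube's api_server.go check, and the certificate paths are the host-side ones from the logged client config.

// healthz.go - a sketch of the /healthz probe above using the client cert,
// key and CA paths that appear in the logged client config.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt",
		"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key",
	)
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}
	resp, err := client.Get("https://192.168.39.196:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s %s\n", resp.Status, body) // expect "200 OK ok"
}
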
	I0807 18:30:32.392452   44266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:30:32.563869   44266 request.go:629] Waited for 171.348742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.563921   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.563931   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.563938   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.563942   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.569072   44266 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0807 18:30:32.573329   44266 system_pods.go:59] 17 kube-system pods found
	I0807 18:30:32.573357   44266 system_pods.go:61] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:30:32.573361   44266 system_pods.go:61] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:30:32.573364   44266 system_pods.go:61] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:30:32.573367   44266 system_pods.go:61] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:30:32.573370   44266 system_pods.go:61] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:30:32.573373   44266 system_pods.go:61] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:30:32.573376   44266 system_pods.go:61] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:30:32.573380   44266 system_pods.go:61] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:30:32.573383   44266 system_pods.go:61] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:30:32.573386   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:30:32.573390   44266 system_pods.go:61] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:30:32.573393   44266 system_pods.go:61] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:30:32.573396   44266 system_pods.go:61] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:30:32.573398   44266 system_pods.go:61] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:30:32.573402   44266 system_pods.go:61] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:30:32.573405   44266 system_pods.go:61] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:30:32.573408   44266 system_pods.go:61] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:30:32.573414   44266 system_pods.go:74] duration metric: took 180.956026ms to wait for pod list to return data ...
	I0807 18:30:32.573421   44266 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:30:32.763885   44266 request.go:629] Waited for 190.379686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:30:32.763936   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:30:32.763941   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.763948   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.763954   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.767012   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:30:32.767286   44266 default_sa.go:45] found service account: "default"
	I0807 18:30:32.767313   44266 default_sa.go:55] duration metric: took 193.885113ms for default service account to be created ...
	I0807 18:30:32.767324   44266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:30:32.963765   44266 request.go:629] Waited for 196.363852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.963831   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:30:32.963837   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:32.963844   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:32.963850   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:32.970431   44266 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0807 18:30:32.975184   44266 system_pods.go:86] 17 kube-system pods found
	I0807 18:30:32.975211   44266 system_pods.go:89] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:30:32.975219   44266 system_pods.go:89] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:30:32.975225   44266 system_pods.go:89] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:30:32.975231   44266 system_pods.go:89] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:30:32.975237   44266 system_pods.go:89] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:30:32.975242   44266 system_pods.go:89] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:30:32.975249   44266 system_pods.go:89] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:30:32.975254   44266 system_pods.go:89] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:30:32.975261   44266 system_pods.go:89] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:30:32.975268   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:30:32.975277   44266 system_pods.go:89] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:30:32.975284   44266 system_pods.go:89] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:30:32.975291   44266 system_pods.go:89] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:30:32.975297   44266 system_pods.go:89] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:30:32.975303   44266 system_pods.go:89] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:30:32.975312   44266 system_pods.go:89] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:30:32.975318   44266 system_pods.go:89] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:30:32.975327   44266 system_pods.go:126] duration metric: took 207.996289ms to wait for k8s-apps to be running ...
	I0807 18:30:32.975339   44266 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:30:32.975391   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:30:32.989953   44266 system_svc.go:56] duration metric: took 14.606769ms WaitForService to wait for kubelet
	I0807 18:30:32.989979   44266 kubeadm.go:582] duration metric: took 22.700179334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:30:32.989999   44266 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:30:33.163417   44266 request.go:629] Waited for 173.330443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0807 18:30:33.163468   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0807 18:30:33.163473   44266 round_trippers.go:469] Request Headers:
	I0807 18:30:33.163480   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:30:33.163484   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:30:33.167772   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:30:33.168822   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:30:33.168846   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:30:33.168861   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:30:33.168866   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:30:33.168872   44266 node_conditions.go:105] duration metric: took 178.867475ms to run NodePressure ...
	I0807 18:30:33.168893   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:30:33.168926   44266 start.go:255] writing updated cluster config ...
	I0807 18:30:33.170904   44266 out.go:177] 
	I0807 18:30:33.172264   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:30:33.172352   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:30:33.173860   44266 out.go:177] * Starting "ha-198246-m03" control-plane node in "ha-198246" cluster
	I0807 18:30:33.175358   44266 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:30:33.175380   44266 cache.go:56] Caching tarball of preloaded images
	I0807 18:30:33.175467   44266 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:30:33.175477   44266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:30:33.175556   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:30:33.175701   44266 start.go:360] acquireMachinesLock for ha-198246-m03: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:30:33.175740   44266 start.go:364] duration metric: took 21.742µs to acquireMachinesLock for "ha-198246-m03"
	I0807 18:30:33.175759   44266 start.go:93] Provisioning new machine with config: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:30:33.175842   44266 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0807 18:30:33.177325   44266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 18:30:33.177407   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:30:33.177444   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:30:33.191872   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46759
	I0807 18:30:33.192346   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:30:33.192788   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:30:33.192811   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:30:33.193150   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:30:33.193346   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:30:33.193498   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:30:33.193662   44266 start.go:159] libmachine.API.Create for "ha-198246" (driver="kvm2")
	I0807 18:30:33.193682   44266 client.go:168] LocalClient.Create starting
	I0807 18:30:33.193707   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 18:30:33.193739   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:30:33.193753   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:30:33.193811   44266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 18:30:33.193842   44266 main.go:141] libmachine: Decoding PEM data...
	I0807 18:30:33.193854   44266 main.go:141] libmachine: Parsing certificate...
	I0807 18:30:33.193877   44266 main.go:141] libmachine: Running pre-create checks...
	I0807 18:30:33.193888   44266 main.go:141] libmachine: (ha-198246-m03) Calling .PreCreateCheck
	I0807 18:30:33.194049   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:30:33.194488   44266 main.go:141] libmachine: Creating machine...
	I0807 18:30:33.194501   44266 main.go:141] libmachine: (ha-198246-m03) Calling .Create
	I0807 18:30:33.194651   44266 main.go:141] libmachine: (ha-198246-m03) Creating KVM machine...
	I0807 18:30:33.195893   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found existing default KVM network
	I0807 18:30:33.196007   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found existing private KVM network mk-ha-198246
	I0807 18:30:33.196136   44266 main.go:141] libmachine: (ha-198246-m03) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 ...
	I0807 18:30:33.196160   44266 main.go:141] libmachine: (ha-198246-m03) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 18:30:33.196236   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.196136   45290 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:30:33.196342   44266 main.go:141] libmachine: (ha-198246-m03) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 18:30:33.432780   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.432647   45290 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa...
	I0807 18:30:33.529287   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.529189   45290 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/ha-198246-m03.rawdisk...
	I0807 18:30:33.529318   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Writing magic tar header
	I0807 18:30:33.529332   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Writing SSH key tar header
	I0807 18:30:33.529343   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:33.529299   45290 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 ...
	I0807 18:30:33.529393   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03
	I0807 18:30:33.529414   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 18:30:33.529433   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03 (perms=drwx------)
	I0807 18:30:33.529447   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:30:33.529464   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 18:30:33.529477   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 18:30:33.529492   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 18:30:33.529508   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home/jenkins
	I0807 18:30:33.529523   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 18:30:33.529538   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Checking permissions on dir: /home
	I0807 18:30:33.529554   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 18:30:33.529567   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 18:30:33.529579   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Skipping /home - not owner
	I0807 18:30:33.529595   44266 main.go:141] libmachine: (ha-198246-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 18:30:33.529611   44266 main.go:141] libmachine: (ha-198246-m03) Creating domain...
	I0807 18:30:33.530487   44266 main.go:141] libmachine: (ha-198246-m03) define libvirt domain using xml: 
	I0807 18:30:33.530514   44266 main.go:141] libmachine: (ha-198246-m03) <domain type='kvm'>
	I0807 18:30:33.530527   44266 main.go:141] libmachine: (ha-198246-m03)   <name>ha-198246-m03</name>
	I0807 18:30:33.530534   44266 main.go:141] libmachine: (ha-198246-m03)   <memory unit='MiB'>2200</memory>
	I0807 18:30:33.530544   44266 main.go:141] libmachine: (ha-198246-m03)   <vcpu>2</vcpu>
	I0807 18:30:33.530555   44266 main.go:141] libmachine: (ha-198246-m03)   <features>
	I0807 18:30:33.530564   44266 main.go:141] libmachine: (ha-198246-m03)     <acpi/>
	I0807 18:30:33.530574   44266 main.go:141] libmachine: (ha-198246-m03)     <apic/>
	I0807 18:30:33.530582   44266 main.go:141] libmachine: (ha-198246-m03)     <pae/>
	I0807 18:30:33.530593   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.530604   44266 main.go:141] libmachine: (ha-198246-m03)   </features>
	I0807 18:30:33.530615   44266 main.go:141] libmachine: (ha-198246-m03)   <cpu mode='host-passthrough'>
	I0807 18:30:33.530622   44266 main.go:141] libmachine: (ha-198246-m03)   
	I0807 18:30:33.530630   44266 main.go:141] libmachine: (ha-198246-m03)   </cpu>
	I0807 18:30:33.530637   44266 main.go:141] libmachine: (ha-198246-m03)   <os>
	I0807 18:30:33.530647   44266 main.go:141] libmachine: (ha-198246-m03)     <type>hvm</type>
	I0807 18:30:33.530659   44266 main.go:141] libmachine: (ha-198246-m03)     <boot dev='cdrom'/>
	I0807 18:30:33.530673   44266 main.go:141] libmachine: (ha-198246-m03)     <boot dev='hd'/>
	I0807 18:30:33.530702   44266 main.go:141] libmachine: (ha-198246-m03)     <bootmenu enable='no'/>
	I0807 18:30:33.530724   44266 main.go:141] libmachine: (ha-198246-m03)   </os>
	I0807 18:30:33.530735   44266 main.go:141] libmachine: (ha-198246-m03)   <devices>
	I0807 18:30:33.530748   44266 main.go:141] libmachine: (ha-198246-m03)     <disk type='file' device='cdrom'>
	I0807 18:30:33.530766   44266 main.go:141] libmachine: (ha-198246-m03)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/boot2docker.iso'/>
	I0807 18:30:33.530778   44266 main.go:141] libmachine: (ha-198246-m03)       <target dev='hdc' bus='scsi'/>
	I0807 18:30:33.530790   44266 main.go:141] libmachine: (ha-198246-m03)       <readonly/>
	I0807 18:30:33.530800   44266 main.go:141] libmachine: (ha-198246-m03)     </disk>
	I0807 18:30:33.530813   44266 main.go:141] libmachine: (ha-198246-m03)     <disk type='file' device='disk'>
	I0807 18:30:33.530826   44266 main.go:141] libmachine: (ha-198246-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 18:30:33.530840   44266 main.go:141] libmachine: (ha-198246-m03)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/ha-198246-m03.rawdisk'/>
	I0807 18:30:33.530856   44266 main.go:141] libmachine: (ha-198246-m03)       <target dev='hda' bus='virtio'/>
	I0807 18:30:33.530868   44266 main.go:141] libmachine: (ha-198246-m03)     </disk>
	I0807 18:30:33.530892   44266 main.go:141] libmachine: (ha-198246-m03)     <interface type='network'>
	I0807 18:30:33.530906   44266 main.go:141] libmachine: (ha-198246-m03)       <source network='mk-ha-198246'/>
	I0807 18:30:33.530917   44266 main.go:141] libmachine: (ha-198246-m03)       <model type='virtio'/>
	I0807 18:30:33.530927   44266 main.go:141] libmachine: (ha-198246-m03)     </interface>
	I0807 18:30:33.530938   44266 main.go:141] libmachine: (ha-198246-m03)     <interface type='network'>
	I0807 18:30:33.530952   44266 main.go:141] libmachine: (ha-198246-m03)       <source network='default'/>
	I0807 18:30:33.530963   44266 main.go:141] libmachine: (ha-198246-m03)       <model type='virtio'/>
	I0807 18:30:33.530976   44266 main.go:141] libmachine: (ha-198246-m03)     </interface>
	I0807 18:30:33.530986   44266 main.go:141] libmachine: (ha-198246-m03)     <serial type='pty'>
	I0807 18:30:33.530996   44266 main.go:141] libmachine: (ha-198246-m03)       <target port='0'/>
	I0807 18:30:33.531010   44266 main.go:141] libmachine: (ha-198246-m03)     </serial>
	I0807 18:30:33.531020   44266 main.go:141] libmachine: (ha-198246-m03)     <console type='pty'>
	I0807 18:30:33.531031   44266 main.go:141] libmachine: (ha-198246-m03)       <target type='serial' port='0'/>
	I0807 18:30:33.531043   44266 main.go:141] libmachine: (ha-198246-m03)     </console>
	I0807 18:30:33.531053   44266 main.go:141] libmachine: (ha-198246-m03)     <rng model='virtio'>
	I0807 18:30:33.531067   44266 main.go:141] libmachine: (ha-198246-m03)       <backend model='random'>/dev/random</backend>
	I0807 18:30:33.531078   44266 main.go:141] libmachine: (ha-198246-m03)     </rng>
	I0807 18:30:33.531119   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.531138   44266 main.go:141] libmachine: (ha-198246-m03)     
	I0807 18:30:33.531151   44266 main.go:141] libmachine: (ha-198246-m03)   </devices>
	I0807 18:30:33.531165   44266 main.go:141] libmachine: (ha-198246-m03) </domain>
	I0807 18:30:33.531182   44266 main.go:141] libmachine: (ha-198246-m03) 
	I0807 18:30:33.537482   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9f:ab:f5 in network default
	I0807 18:30:33.538090   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring networks are active...
	I0807 18:30:33.538108   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:33.538784   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring network default is active
	I0807 18:30:33.539152   44266 main.go:141] libmachine: (ha-198246-m03) Ensuring network mk-ha-198246 is active
	I0807 18:30:33.539485   44266 main.go:141] libmachine: (ha-198246-m03) Getting domain xml...
	I0807 18:30:33.540252   44266 main.go:141] libmachine: (ha-198246-m03) Creating domain...
	I0807 18:30:34.756035   44266 main.go:141] libmachine: (ha-198246-m03) Waiting to get IP...
	I0807 18:30:34.756939   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:34.757511   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:34.757577   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:34.757477   45290 retry.go:31] will retry after 227.908957ms: waiting for machine to come up
	I0807 18:30:34.986907   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:34.987323   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:34.987354   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:34.987276   45290 retry.go:31] will retry after 246.835339ms: waiting for machine to come up
	I0807 18:30:35.235616   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:35.236094   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:35.236119   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:35.236046   45290 retry.go:31] will retry after 426.907083ms: waiting for machine to come up
	I0807 18:30:35.664761   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:35.665183   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:35.665243   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:35.665182   45290 retry.go:31] will retry after 507.132694ms: waiting for machine to come up
	I0807 18:30:36.173688   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:36.174085   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:36.174115   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:36.174025   45290 retry.go:31] will retry after 466.332078ms: waiting for machine to come up
	I0807 18:30:36.642374   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:36.642869   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:36.642896   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:36.642788   45290 retry.go:31] will retry after 802.371451ms: waiting for machine to come up
	I0807 18:30:37.446742   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:37.447182   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:37.447204   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:37.447149   45290 retry.go:31] will retry after 1.058258348s: waiting for machine to come up
	I0807 18:30:38.506869   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:38.507277   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:38.507303   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:38.507243   45290 retry.go:31] will retry after 1.24813663s: waiting for machine to come up
	I0807 18:30:39.757276   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:39.757679   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:39.757708   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:39.757653   45290 retry.go:31] will retry after 1.347201318s: waiting for machine to come up
	I0807 18:30:41.107002   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:41.107475   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:41.107501   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:41.107433   45290 retry.go:31] will retry after 2.164822694s: waiting for machine to come up
	I0807 18:30:43.273615   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:43.274030   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:43.274053   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:43.274008   45290 retry.go:31] will retry after 2.890209035s: waiting for machine to come up
	I0807 18:30:46.165557   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:46.166122   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:46.166152   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:46.166070   45290 retry.go:31] will retry after 3.463040417s: waiting for machine to come up
	I0807 18:30:49.630676   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:49.631090   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:49.631119   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:49.631053   45290 retry.go:31] will retry after 2.865023491s: waiting for machine to come up
	I0807 18:30:52.497203   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:52.497575   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find current IP address of domain ha-198246-m03 in network mk-ha-198246
	I0807 18:30:52.497598   44266 main.go:141] libmachine: (ha-198246-m03) DBG | I0807 18:30:52.497535   45290 retry.go:31] will retry after 4.944323257s: waiting for machine to come up
	I0807 18:30:57.446295   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.446732   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has current primary IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.446753   44266 main.go:141] libmachine: (ha-198246-m03) Found IP for machine: 192.168.39.227
	I0807 18:30:57.446766   44266 main.go:141] libmachine: (ha-198246-m03) Reserving static IP address...
	I0807 18:30:57.447262   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find host DHCP lease matching {name: "ha-198246-m03", mac: "52:54:00:9d:24:52", ip: "192.168.39.227"} in network mk-ha-198246
	I0807 18:30:57.521164   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Getting to WaitForSSH function...
	I0807 18:30:57.521190   44266 main.go:141] libmachine: (ha-198246-m03) Reserved static IP address: 192.168.39.227
	I0807 18:30:57.521199   44266 main.go:141] libmachine: (ha-198246-m03) Waiting for SSH to be available...
	I0807 18:30:57.523681   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:30:57.524059   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246
	I0807 18:30:57.524105   44266 main.go:141] libmachine: (ha-198246-m03) DBG | unable to find defined IP address of network mk-ha-198246 interface with MAC address 52:54:00:9d:24:52
	I0807 18:30:57.524328   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH client type: external
	I0807 18:30:57.524353   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa (-rw-------)
	I0807 18:30:57.524381   44266 main.go:141] libmachine: (ha-198246-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:30:57.524422   44266 main.go:141] libmachine: (ha-198246-m03) DBG | About to run SSH command:
	I0807 18:30:57.524444   44266 main.go:141] libmachine: (ha-198246-m03) DBG | exit 0
	I0807 18:30:57.529188   44266 main.go:141] libmachine: (ha-198246-m03) DBG | SSH cmd err, output: exit status 255: 
	I0807 18:30:57.529209   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0807 18:30:57.529217   44266 main.go:141] libmachine: (ha-198246-m03) DBG | command : exit 0
	I0807 18:30:57.529223   44266 main.go:141] libmachine: (ha-198246-m03) DBG | err     : exit status 255
	I0807 18:30:57.529230   44266 main.go:141] libmachine: (ha-198246-m03) DBG | output  : 
	I0807 18:31:00.531629   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Getting to WaitForSSH function...
	I0807 18:31:00.534035   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.534413   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.534441   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.534511   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH client type: external
	I0807 18:31:00.534527   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa (-rw-------)
	I0807 18:31:00.534578   44266 main.go:141] libmachine: (ha-198246-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 18:31:00.534598   44266 main.go:141] libmachine: (ha-198246-m03) DBG | About to run SSH command:
	I0807 18:31:00.534623   44266 main.go:141] libmachine: (ha-198246-m03) DBG | exit 0
	I0807 18:31:00.664624   44266 main.go:141] libmachine: (ha-198246-m03) DBG | SSH cmd err, output: <nil>: 
	I0807 18:31:00.664910   44266 main.go:141] libmachine: (ha-198246-m03) KVM machine creation complete!
	I0807 18:31:00.665347   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:31:00.665908   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:00.666128   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:00.666310   44266 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 18:31:00.666326   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:31:00.667883   44266 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 18:31:00.667900   44266 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 18:31:00.667908   44266 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 18:31:00.667916   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.670520   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.671001   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.671032   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.671175   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.671364   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.671513   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.671630   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.671786   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.671980   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.671990   44266 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 18:31:00.787597   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:31:00.787623   44266 main.go:141] libmachine: Detecting the provisioner...
	I0807 18:31:00.787633   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.790865   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.791362   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.791388   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.791510   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.791714   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.791937   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.792190   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.792379   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.792539   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.792549   44266 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 18:31:00.909345   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 18:31:00.909405   44266 main.go:141] libmachine: found compatible host: buildroot
	I0807 18:31:00.909414   44266 main.go:141] libmachine: Provisioning with buildroot...
	I0807 18:31:00.909421   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:00.909684   44266 buildroot.go:166] provisioning hostname "ha-198246-m03"
	I0807 18:31:00.909709   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:00.909928   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:00.913329   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.913773   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:00.913798   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:00.913978   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:00.914169   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.914339   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:00.914512   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:00.914692   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:00.914895   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:00.914915   44266 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246-m03 && echo "ha-198246-m03" | sudo tee /etc/hostname
	I0807 18:31:01.046391   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246-m03
	
	I0807 18:31:01.046419   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.049459   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.049924   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.049953   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.050088   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.050268   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.050448   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.050586   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.050755   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.050909   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.050924   44266 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:31:01.178381   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:31:01.178417   44266 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:31:01.178436   44266 buildroot.go:174] setting up certificates
	I0807 18:31:01.178447   44266 provision.go:84] configureAuth start
	I0807 18:31:01.178459   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetMachineName
	I0807 18:31:01.178749   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:01.181683   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.182031   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.182058   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.182247   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.184746   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.185072   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.185101   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.185229   44266 provision.go:143] copyHostCerts
	I0807 18:31:01.185260   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:31:01.185298   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:31:01.185309   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:31:01.185381   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:31:01.185480   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:31:01.185505   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:31:01.185514   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:31:01.185554   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:31:01.185619   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:31:01.185643   44266 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:31:01.185648   44266 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:31:01.185683   44266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:31:01.185753   44266 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246-m03 san=[127.0.0.1 192.168.39.227 ha-198246-m03 localhost minikube]
	I0807 18:31:01.354582   44266 provision.go:177] copyRemoteCerts
	I0807 18:31:01.354653   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:31:01.354683   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.357461   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.357784   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.357817   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.358072   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.358268   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.358436   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.358560   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:01.447576   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:31:01.447656   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:31:01.475031   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:31:01.475102   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 18:31:01.501202   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:31:01.501289   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 18:31:01.528456   44266 provision.go:87] duration metric: took 349.995722ms to configureAuth
	I0807 18:31:01.528486   44266 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:31:01.528699   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:01.528777   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.531665   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.532012   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.532042   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.532225   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.532423   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.532595   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.532702   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.532873   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.533031   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.533047   44266 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:31:01.817075   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:31:01.817099   44266 main.go:141] libmachine: Checking connection to Docker...
	I0807 18:31:01.817118   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetURL
	I0807 18:31:01.818384   44266 main.go:141] libmachine: (ha-198246-m03) DBG | Using libvirt version 6000000
	I0807 18:31:01.821056   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.821418   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.821439   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.821580   44266 main.go:141] libmachine: Docker is up and running!
	I0807 18:31:01.821595   44266 main.go:141] libmachine: Reticulating splines...
	I0807 18:31:01.821603   44266 client.go:171] duration metric: took 28.627914411s to LocalClient.Create
	I0807 18:31:01.821631   44266 start.go:167] duration metric: took 28.627967701s to libmachine.API.Create "ha-198246"
	I0807 18:31:01.821643   44266 start.go:293] postStartSetup for "ha-198246-m03" (driver="kvm2")
	I0807 18:31:01.821659   44266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:31:01.821698   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:01.821917   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:31:01.821940   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.824112   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.824469   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.824488   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.824623   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.824800   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.824973   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.825155   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:01.915999   44266 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:31:01.920422   44266 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:31:01.920443   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:31:01.920514   44266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:31:01.920605   44266 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:31:01.920618   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:31:01.920730   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:31:01.931294   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:31:01.955945   44266 start.go:296] duration metric: took 134.285824ms for postStartSetup
	I0807 18:31:01.956001   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetConfigRaw
	I0807 18:31:01.956611   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:01.959322   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.959688   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.959723   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.959995   44266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:31:01.960180   44266 start.go:128] duration metric: took 28.784328806s to createHost
	I0807 18:31:01.960233   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:01.962214   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.962551   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:01.962579   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:01.962733   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:01.962916   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.963080   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:01.963211   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:01.963362   44266 main.go:141] libmachine: Using SSH client type: native
	I0807 18:31:01.963518   44266 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0807 18:31:01.963528   44266 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:31:02.077437   44266 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723055462.057555331
	
	I0807 18:31:02.077460   44266 fix.go:216] guest clock: 1723055462.057555331
	I0807 18:31:02.077470   44266 fix.go:229] Guest: 2024-08-07 18:31:02.057555331 +0000 UTC Remote: 2024-08-07 18:31:01.960191536 +0000 UTC m=+220.271212198 (delta=97.363795ms)
	I0807 18:31:02.077490   44266 fix.go:200] guest clock delta is within tolerance: 97.363795ms
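The mangled "date +%!s(MISSING).%!N(MISSING)" a few lines up is the same logging quirk; the command minikube runs is "date +%s.%N", and the guest's epoch time is compared against the host's to confirm the skew (about 97 ms here) is within tolerance. A host-side sketch of the same check, reusing the node IP and the "docker" SSH user shown in this log (bc is assumed to be available):

    # Compare guest and host clocks; a small positive or negative delta is fine.
    guest=$(ssh docker@192.168.39.227 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) seconds"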
	I0807 18:31:02.077497   44266 start.go:83] releasing machines lock for "ha-198246-m03", held for 28.901748397s
	I0807 18:31:02.077520   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.077788   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:02.081280   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.081885   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.081913   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.084121   44266 out.go:177] * Found network options:
	I0807 18:31:02.085422   44266 out.go:177]   - NO_PROXY=192.168.39.196,192.168.39.251
	W0807 18:31:02.086688   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:31:02.086711   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:31:02.086726   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087351   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087542   44266 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:31:02.087647   44266 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:31:02.087697   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	W0807 18:31:02.087728   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	W0807 18:31:02.087754   44266 proxy.go:119] fail to check proxy env: Error ip not in block
	I0807 18:31:02.087831   44266 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:31:02.087877   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:31:02.090758   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.090950   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091267   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.091288   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091311   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:02.091327   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:02.091450   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:02.091624   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:31:02.091638   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:02.091819   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:31:02.091826   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:02.091982   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:31:02.091975   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:02.092120   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:31:02.330635   44266 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:31:02.338200   44266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:31:02.338275   44266 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:31:02.355776   44266 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 18:31:02.355798   44266 start.go:495] detecting cgroup driver to use...
	I0807 18:31:02.355869   44266 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:31:02.373960   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:31:02.388788   44266 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:31:02.388863   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:31:02.402456   44266 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:31:02.415862   44266 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:31:02.528910   44266 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:31:02.692177   44266 docker.go:233] disabling docker service ...
	I0807 18:31:02.692260   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:31:02.708366   44266 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:31:02.722150   44266 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:31:02.842254   44266 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:31:02.963283   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:31:02.979860   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:31:03.000776   44266 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:31:03.000833   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.012949   44266 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:31:03.013019   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.025364   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.037815   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.050150   44266 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:31:03.062786   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.074694   44266 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:31:03.094223   44266 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
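The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0. A quick way to spot-check the result on the node (expected values are sketched from the sed expressions, not captured from this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",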
	I0807 18:31:03.106816   44266 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:31:03.117233   44266 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 18:31:03.117281   44266 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 18:31:03.130652   44266 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:31:03.140978   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:03.261390   44266 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:31:03.415655   44266 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:31:03.415731   44266 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:31:03.420847   44266 start.go:563] Will wait 60s for crictl version
	I0807 18:31:03.420894   44266 ssh_runner.go:195] Run: which crictl
	I0807 18:31:03.424888   44266 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:31:03.466634   44266 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:31:03.466722   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:31:03.495718   44266 ssh_runner.go:195] Run: crio --version
	I0807 18:31:03.666880   44266 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:31:03.742658   44266 out.go:177]   - env NO_PROXY=192.168.39.196
	I0807 18:31:03.816001   44266 out.go:177]   - env NO_PROXY=192.168.39.196,192.168.39.251
	I0807 18:31:03.888374   44266 main.go:141] libmachine: (ha-198246-m03) Calling .GetIP
	I0807 18:31:03.891307   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:03.891715   44266 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:31:03.891745   44266 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:31:03.891998   44266 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:31:03.896652   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
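The /etc/hosts update above uses a filter-and-replace idiom so the host.minikube.internal entry is never duplicated; the rewrite goes through a temp file and a sudo cp because only the copy, not the shell redirect, runs as root. Unrolled (a sketch):

    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts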
	I0807 18:31:03.912117   44266 mustload.go:65] Loading cluster: ha-198246
	I0807 18:31:03.912501   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:03.912897   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:03.912950   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:03.928344   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0807 18:31:03.928736   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:03.929306   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:03.929334   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:03.929692   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:03.929888   44266 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:31:03.931789   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:31:03.932081   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:03.932119   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:03.947851   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33021
	I0807 18:31:03.948291   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:03.948876   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:03.948893   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:03.949204   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:03.949455   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:31:03.949625   44266 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.227
	I0807 18:31:03.949636   44266 certs.go:194] generating shared ca certs ...
	I0807 18:31:03.949650   44266 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:03.949763   44266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:31:03.949804   44266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:31:03.949809   44266 certs.go:256] generating profile certs ...
	I0807 18:31:03.949874   44266 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:31:03.949895   44266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5
	I0807 18:31:03.949910   44266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.227 192.168.39.254]
	I0807 18:31:04.235062   44266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 ...
	I0807 18:31:04.235104   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5: {Name:mkc9ab09dfcc0a08e4cded1def253097d11345ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:04.235325   44266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5 ...
	I0807 18:31:04.235345   44266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5: {Name:mk706ab9d0d4064858493bbf1c933d49d1f0fd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:31:04.235444   44266 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.1af9f5f5 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:31:04.284244   44266 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.1af9f5f5 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:31:04.284561   44266 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:31:04.284585   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:31:04.284607   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:31:04.284635   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:31:04.284654   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:31:04.284671   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:31:04.284704   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:31:04.284726   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:31:04.284747   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:31:04.284824   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:31:04.284871   44266 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:31:04.284888   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:31:04.284977   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:31:04.285053   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:31:04.285089   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:31:04.285156   44266 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:31:04.285203   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.285227   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.285244   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.285288   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:31:04.288899   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:04.289445   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:31:04.289477   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:04.289641   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:31:04.289875   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:31:04.290047   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:31:04.290209   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:31:04.368646   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0807 18:31:04.375791   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0807 18:31:04.387726   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0807 18:31:04.392669   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0807 18:31:04.404818   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0807 18:31:04.409538   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0807 18:31:04.423952   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0807 18:31:04.429946   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0807 18:31:04.442196   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0807 18:31:04.447075   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0807 18:31:04.467205   44266 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0807 18:31:04.472136   44266 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0807 18:31:04.484789   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:31:04.513657   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:31:04.541568   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:31:04.570650   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:31:04.599209   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0807 18:31:04.624315   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 18:31:04.649418   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:31:04.674771   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:31:04.701297   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:31:04.728656   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:31:04.756136   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:31:04.783116   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0807 18:31:04.800682   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0807 18:31:04.818998   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0807 18:31:04.836194   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0807 18:31:04.854131   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0807 18:31:04.871939   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0807 18:31:04.888443   44266 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0807 18:31:04.905275   44266 ssh_runner.go:195] Run: openssl version
	I0807 18:31:04.911814   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:31:04.922949   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.927578   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.927640   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:31:04.934032   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:31:04.945014   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:31:04.957480   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.962404   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.962459   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:31:04.968351   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:31:04.980460   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:31:04.992337   44266 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.997356   44266 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:04.997422   44266 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:31:05.003783   44266 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
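The numeric link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values; OpenSSL resolves CAs in /etc/ssl/certs by that hash, so each installed certificate gets a matching "<hash>.0" symlink. The minikube CA link built by hand, as a sketch of the same idiom:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941 in this run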
	I0807 18:31:05.015178   44266 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:31:05.019430   44266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 18:31:05.019490   44266 kubeadm.go:934] updating node {m03 192.168.39.227 8443 v1.30.3 crio true true} ...
	I0807 18:31:05.019580   44266 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
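The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in (the 313-byte 10-kubeadm.conf copied later in this log); the empty ExecStart= line clears the packaged default before the node-specific --hostname-override and --node-ip flags are set. Once installed, the effective unit can be checked with (a sketch):

    systemctl cat kubelet | grep -A3 '^ExecStart='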
	I0807 18:31:05.019607   44266 kube-vip.go:115] generating kube-vip config ...
	I0807 18:31:05.019640   44266 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:31:05.036848   44266 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:31:05.036914   44266 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
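This generated manifest runs kube-vip as a static pod on each control-plane node: leader election on the plndr-cp-lock lease decides which node answers ARP for the 192.168.39.254 VIP on eth0, and lb_enable load-balances API traffic on port 8443. Hedged verification commands for a node in this cluster (names and addresses taken from the config above):

    ip addr show eth0 | grep 192.168.39.254        # present only on the current leader
    curl -k https://192.168.39.254:8443/healthz    # answers once an apiserver is reachable via the VIP
    kubectl -n kube-system get lease plndr-cp-lock # shows which node holds leadership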
	I0807 18:31:05.036972   44266 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:31:05.047848   44266 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0807 18:31:05.047893   44266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0807 18:31:05.058827   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0807 18:31:05.058853   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:31:05.058935   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0807 18:31:05.058827   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0807 18:31:05.058826   44266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0807 18:31:05.059037   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:31:05.059054   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:31:05.059162   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0807 18:31:05.063740   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0807 18:31:05.063770   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0807 18:31:05.093775   44266 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:31:05.093820   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0807 18:31:05.093855   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0807 18:31:05.093875   44266 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0807 18:31:05.147621   44266 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0807 18:31:05.147679   44266 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
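The kubeadm, kubelet and kubectl binaries are fetched from dl.k8s.io with their published .sha256 files as checksums, and are only copied to the node because the stat existence checks above failed on this fresh machine. A manual equivalent of the download-and-verify step (a sketch using the same URLs logged above):

    v=v1.30.3; arch=amd64
    for b in kubeadm kubelet kubectl; do
      curl -fsSLo "$b" "https://dl.k8s.io/release/$v/bin/linux/$arch/$b"
      echo "$(curl -fsSL https://dl.k8s.io/release/$v/bin/linux/$arch/$b.sha256)  $b" | sha256sum -c -
    done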
	I0807 18:31:06.022179   44266 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0807 18:31:06.032561   44266 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0807 18:31:06.051718   44266 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:31:06.069963   44266 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:31:06.088103   44266 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:31:06.092277   44266 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 18:31:06.105287   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:06.220917   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:31:06.238937   44266 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:31:06.239328   44266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:31:06.239375   44266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:31:06.258371   44266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0807 18:31:06.258888   44266 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:31:06.259464   44266 main.go:141] libmachine: Using API Version  1
	I0807 18:31:06.259488   44266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:31:06.259882   44266 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:31:06.260092   44266 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:31:06.260264   44266 start.go:317] joinCluster: &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:31:06.260379   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0807 18:31:06.260399   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:31:06.263930   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:06.264431   44266 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:31:06.264458   44266 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:31:06.264644   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:31:06.264810   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:31:06.264929   44266 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:31:06.265035   44266 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:31:06.435193   44266 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 18:31:06.435239   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token od1f23.v6j6x0epna3a85qa --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443"
	I0807 18:31:30.206281   44266 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token od1f23.v6j6x0epna3a85qa --discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-198246-m03 --control-plane --apiserver-advertise-address=192.168.39.227 --apiserver-bind-port=8443": (23.77100816s)
	I0807 18:31:30.206317   44266 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0807 18:31:30.813324   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-198246-m03 minikube.k8s.io/updated_at=2024_08_07T18_31_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=ha-198246 minikube.k8s.io/primary=false
	I0807 18:31:30.964365   44266 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-198246-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0807 18:31:31.090417   44266 start.go:319] duration metric: took 24.830149142s to joinCluster
	I0807 18:31:31.090498   44266 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
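At this point ha-198246-m03 has joined as a third control-plane node, been labeled, and had its NoSchedule control-plane taint removed so it can also run workloads. Sanity checks one could run against this cluster's kubeconfig (a sketch; the selectors are the standard kubeadm static-pod labels):

    kubectl get nodes -o wide                                  # ha-198246-m03 should be listed
    kubectl -n kube-system get pods -l component=etcd          # one etcd member per control-plane node
    kubectl -n kube-system get pods -l component=kube-apiserver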
	I0807 18:31:31.090781   44266 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:31:31.093184   44266 out.go:177] * Verifying Kubernetes components...
	I0807 18:31:31.094437   44266 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:31:31.342260   44266 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:31:31.362745   44266 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:31:31.363071   44266 kapi.go:59] client config for ha-198246: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.crt", KeyFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key", CAFile:"/home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0807 18:31:31.363166   44266 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.196:8443
	I0807 18:31:31.363437   44266 node_ready.go:35] waiting up to 6m0s for node "ha-198246-m03" to be "Ready" ...
	I0807 18:31:31.363528   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:31.363541   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:31.363551   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:31.363556   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:31.367408   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:31.864633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:31.864676   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:31.864702   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:31.864711   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:31.868168   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:32.363859   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:32.363895   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:32.363903   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:32.363908   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:32.367827   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:32.863813   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:32.863834   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:32.863841   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:32.863846   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:32.867002   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:33.363594   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:33.363616   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:33.363625   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:33.363631   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:33.368287   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:33.369938   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
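The repeated GETs here are node_ready.go polling the node object roughly every 500 ms and reading the Ready condition from status.conditions, giving up after the 6-minute budget declared at 18:31:31. A shorthand for the same wait (a sketch):

    kubectl wait --for=condition=Ready node/ha-198246-m03 --timeout=6m
    # or inspect the raw condition the poller reads:
    kubectl get node ha-198246-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'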
	I0807 18:31:33.864014   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:33.864035   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:33.864043   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:33.864050   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:33.868446   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:34.364544   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:34.364563   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:34.364568   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:34.364571   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:34.368487   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:34.863667   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:34.863695   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:34.863705   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:34.863711   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:34.867251   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:35.364368   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:35.364391   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:35.364397   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:35.364405   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:35.368606   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:35.370093   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:35.864081   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:35.864108   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:35.864120   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:35.864126   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:35.867805   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:36.363814   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:36.363838   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:36.363848   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:36.363854   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:36.367626   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:36.863972   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:36.863992   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:36.864000   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:36.864004   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:36.867776   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.363945   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:37.363966   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:37.363974   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:37.363977   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:37.367672   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.864665   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:37.864704   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:37.864712   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:37.864715   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:37.868330   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:37.869986   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:38.363639   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:38.363660   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:38.363668   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:38.363672   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:38.367008   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:38.863892   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:38.863919   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:38.863931   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:38.863935   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:38.872605   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:39.364337   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:39.364368   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:39.364375   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:39.364379   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:39.367983   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:39.863967   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:39.863990   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:39.863999   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:39.864003   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:39.867134   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:40.363638   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:40.363664   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:40.363675   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:40.363680   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:40.367384   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:40.368391   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:40.863639   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:40.863658   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:40.863665   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:40.863669   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:40.866980   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:41.364633   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:41.364655   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:41.364665   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:41.364671   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:41.368280   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:41.864268   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:41.864288   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:41.864297   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:41.864301   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:41.868013   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:42.364521   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:42.364546   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:42.364557   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:42.364564   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:42.367904   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:42.368754   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:42.864040   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:42.864061   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:42.864069   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:42.864073   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:42.867582   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:43.363930   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:43.363950   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:43.363958   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:43.363961   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:43.368329   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:43.864004   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:43.864030   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:43.864042   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:43.864054   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:43.868118   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:44.364372   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:44.364399   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:44.364411   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:44.364416   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:44.367990   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:44.863922   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:44.863945   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:44.863957   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:44.863965   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:44.868231   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:44.869027   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:45.364338   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:45.364359   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:45.364367   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:45.364372   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:45.368558   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:45.863928   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:45.863949   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:45.863957   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:45.863962   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:45.867520   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.363984   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:46.364009   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:46.364017   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:46.364022   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:46.367693   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.864611   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:46.864635   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:46.864643   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:46.864647   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:46.868195   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:46.869168   44266 node_ready.go:53] node "ha-198246-m03" has status "Ready":"False"
	I0807 18:31:47.363985   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:47.364006   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:47.364014   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:47.364018   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:47.367513   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:47.863715   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:47.863735   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:47.863743   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:47.863748   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:47.866941   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.364283   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:48.364304   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.364311   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.364315   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.368301   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.864297   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:48.864317   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.864326   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.864332   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.867691   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.868357   44266 node_ready.go:49] node "ha-198246-m03" has status "Ready":"True"
	I0807 18:31:48.868374   44266 node_ready.go:38] duration metric: took 17.504916336s for node "ha-198246-m03" to be "Ready" ...
	I0807 18:31:48.868382   44266 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:48.868439   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:48.868447   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.868454   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.868458   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.875973   44266 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:31:48.882318   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.882408   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rbnrx
	I0807 18:31:48.882420   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.882431   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.882444   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.885507   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.886130   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.886147   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.886156   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.886162   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.888994   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.889510   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.889528   44266 pod_ready.go:81] duration metric: took 7.186047ms for pod "coredns-7db6d8ff4d-rbnrx" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.889537   44266 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.889582   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-w6w6g
	I0807 18:31:48.889589   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.889596   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.889601   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.893021   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:48.894159   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.894181   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.894188   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.894192   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.896425   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.896893   44266 pod_ready.go:92] pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.896909   44266 pod_ready.go:81] duration metric: took 7.366231ms for pod "coredns-7db6d8ff4d-w6w6g" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.896917   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.896961   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246
	I0807 18:31:48.896967   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.896975   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.896982   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.899237   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.899953   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:48.899970   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.899978   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.899983   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.902186   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.902691   44266 pod_ready.go:92] pod "etcd-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.902715   44266 pod_ready.go:81] duration metric: took 5.790956ms for pod "etcd-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.902726   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.902784   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m02
	I0807 18:31:48.902795   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.902803   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.902814   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.905329   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.905806   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:48.905821   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:48.905828   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:48.905832   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:48.908047   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:48.908655   44266 pod_ready.go:92] pod "etcd-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:48.908670   44266 pod_ready.go:81] duration metric: took 5.936535ms for pod "etcd-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:48.908678   44266 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.064661   44266 request.go:629] Waited for 155.923893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m03
	I0807 18:31:49.064753   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246-m03
	I0807 18:31:49.064759   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.064764   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.064772   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.068282   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.265339   44266 request.go:629] Waited for 196.371663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:49.265425   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:49.265438   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.265449   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.265456   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.268957   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.269555   44266 pod_ready.go:92] pod "etcd-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:49.269571   44266 pod_ready.go:81] duration metric: took 360.885615ms for pod "etcd-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.269587   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.464819   44266 request.go:629] Waited for 195.162513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:31:49.464903   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246
	I0807 18:31:49.464909   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.464916   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.464921   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.469362   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:49.664800   44266 request.go:629] Waited for 194.369823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:49.664876   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:49.664881   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.664887   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.664909   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.668254   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:49.668909   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:49.668928   44266 pod_ready.go:81] duration metric: took 399.332717ms for pod "kube-apiserver-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.668937   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:49.864884   44266 request.go:629] Waited for 195.895244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:31:49.864939   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m02
	I0807 18:31:49.864944   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:49.864964   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:49.864968   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:49.868343   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.065388   44266 request.go:629] Waited for 196.362909ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:50.065438   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:50.065443   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.065450   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.065455   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.069435   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.070338   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.070357   44266 pod_ready.go:81] duration metric: took 401.414954ms for pod "kube-apiserver-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.070367   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.264445   44266 request.go:629] Waited for 194.01249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m03
	I0807 18:31:50.264517   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-198246-m03
	I0807 18:31:50.264525   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.264534   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.264540   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.268180   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.465319   44266 request.go:629] Waited for 196.408254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:50.465387   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:50.465391   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.465398   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.465403   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.468707   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.469431   44266 pod_ready.go:92] pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.469449   44266 pod_ready.go:81] duration metric: took 399.076161ms for pod "kube-apiserver-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.469459   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.664732   44266 request.go:629] Waited for 195.186866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:31:50.664805   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246
	I0807 18:31:50.664816   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.664827   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.664835   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.668528   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:50.864795   44266 request.go:629] Waited for 195.34558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:50.864864   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:50.864871   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:50.864880   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:50.864888   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:50.867688   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:50.868601   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:50.868620   44266 pod_ready.go:81] duration metric: took 399.154742ms for pod "kube-controller-manager-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:50.868630   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.064678   44266 request.go:629] Waited for 195.987732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:31:51.064754   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m02
	I0807 18:31:51.064761   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.064772   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.064783   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.068355   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.265387   44266 request.go:629] Waited for 196.386347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:51.265453   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:51.265460   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.265471   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.265480   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.269137   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.269661   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:51.269679   44266 pod_ready.go:81] duration metric: took 401.043609ms for pod "kube-controller-manager-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.269689   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.465093   44266 request.go:629] Waited for 195.339663ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m03
	I0807 18:31:51.465157   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246-m03
	I0807 18:31:51.465165   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.465174   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.465179   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.468791   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.664927   44266 request.go:629] Waited for 195.372605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:51.664995   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:51.665006   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.665017   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.665027   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.668549   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:51.669363   44266 pod_ready.go:92] pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:51.669381   44266 pod_ready.go:81] duration metric: took 399.686225ms for pod "kube-controller-manager-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.669390   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:51.865256   44266 request.go:629] Waited for 195.79115ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:31:51.865313   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4l79v
	I0807 18:31:51.865320   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:51.865329   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:51.865334   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:51.873470   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:52.064458   44266 request.go:629] Waited for 190.295419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:52.064521   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:52.064526   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.064533   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.064538   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.067938   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.068805   44266 pod_ready.go:92] pod "kube-proxy-4l79v" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.068837   44266 pod_ready.go:81] duration metric: took 399.436427ms for pod "kube-proxy-4l79v" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.068851   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7mttr" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.264784   44266 request.go:629] Waited for 195.867102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7mttr
	I0807 18:31:52.264838   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7mttr
	I0807 18:31:52.264843   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.264849   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.264852   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.269765   44266 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0807 18:31:52.464903   44266 request.go:629] Waited for 194.439324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:52.464972   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:52.464983   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.464993   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.465002   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.468248   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.468752   44266 pod_ready.go:92] pod "kube-proxy-7mttr" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.468774   44266 pod_ready.go:81] duration metric: took 399.914652ms for pod "kube-proxy-7mttr" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.468783   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.664867   44266 request.go:629] Waited for 196.022855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:31:52.664951   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-proxy-m5ng2
	I0807 18:31:52.664959   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.664973   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.664988   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.668228   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.865340   44266 request.go:629] Waited for 196.363915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:52.865394   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:52.865399   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:52.865406   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:52.865411   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:52.868878   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:52.869535   44266 pod_ready.go:92] pod "kube-proxy-m5ng2" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:52.869556   44266 pod_ready.go:81] duration metric: took 400.766778ms for pod "kube-proxy-m5ng2" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:52.869565   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.064548   44266 request.go:629] Waited for 194.920878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:31:53.064617   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246
	I0807 18:31:53.064625   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.064633   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.064640   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.068146   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.265207   44266 request.go:629] Waited for 196.43783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:53.265255   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246
	I0807 18:31:53.265260   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.265267   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.265272   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.268523   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.269186   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:53.269204   44266 pod_ready.go:81] duration metric: took 399.633139ms for pod "kube-scheduler-ha-198246" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.269217   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.465360   44266 request.go:629] Waited for 196.088508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:31:53.465413   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m02
	I0807 18:31:53.465418   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.465433   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.465450   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.468768   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.664761   44266 request.go:629] Waited for 195.371572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:53.664812   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m02
	I0807 18:31:53.664817   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.664824   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.664827   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.668421   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:53.669073   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:53.669096   44266 pod_ready.go:81] duration metric: took 399.871721ms for pod "kube-scheduler-ha-198246-m02" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.669110   44266 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:53.865211   44266 request.go:629] Waited for 196.027374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m03
	I0807 18:31:53.865290   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-198246-m03
	I0807 18:31:53.865298   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:53.865305   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:53.865314   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:53.868661   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.064951   44266 request.go:629] Waited for 195.756654ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:54.065010   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes/ha-198246-m03
	I0807 18:31:54.065018   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.065027   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.065032   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.068111   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.068802   44266 pod_ready.go:92] pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace has status "Ready":"True"
	I0807 18:31:54.068820   44266 pod_ready.go:81] duration metric: took 399.702974ms for pod "kube-scheduler-ha-198246-m03" in "kube-system" namespace to be "Ready" ...
	I0807 18:31:54.068830   44266 pod_ready.go:38] duration metric: took 5.200435833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 18:31:54.068843   44266 api_server.go:52] waiting for apiserver process to appear ...
	I0807 18:31:54.068887   44266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:31:54.084598   44266 api_server.go:72] duration metric: took 22.994065627s to wait for apiserver process to appear ...
	I0807 18:31:54.084621   44266 api_server.go:88] waiting for apiserver healthz status ...
	I0807 18:31:54.084641   44266 api_server.go:253] Checking apiserver healthz at https://192.168.39.196:8443/healthz ...
	I0807 18:31:54.090716   44266 api_server.go:279] https://192.168.39.196:8443/healthz returned 200:
	ok
	I0807 18:31:54.090787   44266 round_trippers.go:463] GET https://192.168.39.196:8443/version
	I0807 18:31:54.090798   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.090908   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.090933   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.091732   44266 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0807 18:31:54.091793   44266 api_server.go:141] control plane version: v1.30.3
	I0807 18:31:54.091810   44266 api_server.go:131] duration metric: took 7.181714ms to wait for apiserver health ...
	I0807 18:31:54.091828   44266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0807 18:31:54.264554   44266 request.go:629] Waited for 172.642251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.264604   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.264611   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.264621   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.264626   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.272067   44266 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0807 18:31:54.279487   44266 system_pods.go:59] 24 kube-system pods found
	I0807 18:31:54.279517   44266 system_pods.go:61] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:31:54.279526   44266 system_pods.go:61] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:31:54.279532   44266 system_pods.go:61] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:31:54.279537   44266 system_pods.go:61] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:31:54.279542   44266 system_pods.go:61] "etcd-ha-198246-m03" [8df491af-6c48-41d6-873f-c1c39afac2f8] Running
	I0807 18:31:54.279547   44266 system_pods.go:61] "kindnet-7854s" [f87d6292-b9b6-4f63-912c-9dfda0471e2e] Running
	I0807 18:31:54.279552   44266 system_pods.go:61] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:31:54.279556   44266 system_pods.go:61] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:31:54.279562   44266 system_pods.go:61] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:31:54.279567   44266 system_pods.go:61] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:31:54.279573   44266 system_pods.go:61] "kube-apiserver-ha-198246-m03" [c589756a-dda8-44a8-82bb-60532e74eb8b] Running
	I0807 18:31:54.279581   44266 system_pods.go:61] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:31:54.279587   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:31:54.279592   44266 system_pods.go:61] "kube-controller-manager-ha-198246-m03" [5e0d97af-b071-4467-8c3a-dc71f904e84c] Running
	I0807 18:31:54.279597   44266 system_pods.go:61] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:31:54.279602   44266 system_pods.go:61] "kube-proxy-7mttr" [7cb96f6e-47a5-4d6c-a80e-77df1eafc970] Running
	I0807 18:31:54.279608   44266 system_pods.go:61] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:31:54.279616   44266 system_pods.go:61] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:31:54.279621   44266 system_pods.go:61] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:31:54.279626   44266 system_pods.go:61] "kube-scheduler-ha-198246-m03" [5fe100c3-b0a4-4499-a7e2-330c88ee8162] Running
	I0807 18:31:54.279633   44266 system_pods.go:61] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:31:54.279638   44266 system_pods.go:61] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:31:54.279643   44266 system_pods.go:61] "kube-vip-ha-198246-m03" [ba0ab294-fb6f-4161-82f7-288a2a0d4f13] Running
	I0807 18:31:54.279649   44266 system_pods.go:61] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:31:54.279657   44266 system_pods.go:74] duration metric: took 187.820696ms to wait for pod list to return data ...
	I0807 18:31:54.279670   44266 default_sa.go:34] waiting for default service account to be created ...
	I0807 18:31:54.465078   44266 request.go:629] Waited for 185.333525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:31:54.465131   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/default/serviceaccounts
	I0807 18:31:54.465136   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.465143   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.465169   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.467798   44266 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0807 18:31:54.467923   44266 default_sa.go:45] found service account: "default"
	I0807 18:31:54.467940   44266 default_sa.go:55] duration metric: took 188.262232ms for default service account to be created ...
	I0807 18:31:54.467950   44266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0807 18:31:54.664308   44266 request.go:629] Waited for 196.296927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.664402   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/namespaces/kube-system/pods
	I0807 18:31:54.664413   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.664425   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.664436   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.673358   44266 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0807 18:31:54.681646   44266 system_pods.go:86] 24 kube-system pods found
	I0807 18:31:54.681689   44266 system_pods.go:89] "coredns-7db6d8ff4d-rbnrx" [96fa387b-f93b-40df-9ed6-78834f3d02df] Running
	I0807 18:31:54.681698   44266 system_pods.go:89] "coredns-7db6d8ff4d-w6w6g" [143456ef-ffd1-4d42-b9d0-6b778094eca5] Running
	I0807 18:31:54.681710   44266 system_pods.go:89] "etcd-ha-198246" [861c9809-7151-4564-acae-2ad35ada4196] Running
	I0807 18:31:54.681722   44266 system_pods.go:89] "etcd-ha-198246-m02" [af692dc4-ba35-4226-999d-28fa1a44235c] Running
	I0807 18:31:54.681729   44266 system_pods.go:89] "etcd-ha-198246-m03" [8df491af-6c48-41d6-873f-c1c39afac2f8] Running
	I0807 18:31:54.681736   44266 system_pods.go:89] "kindnet-7854s" [f87d6292-b9b6-4f63-912c-9dfda0471e2e] Running
	I0807 18:31:54.681743   44266 system_pods.go:89] "kindnet-8x6fj" [24dceff9-a78c-47c7-9d36-01fbd62ee362] Running
	I0807 18:31:54.681760   44266 system_pods.go:89] "kindnet-sgl8v" [574aa453-48ef-44ff-b10a-13142fc8cf7f] Running
	I0807 18:31:54.681767   44266 system_pods.go:89] "kube-apiserver-ha-198246" [52e51327-3341-452e-b7bd-95a80adde42f] Running
	I0807 18:31:54.681773   44266 system_pods.go:89] "kube-apiserver-ha-198246-m02" [a983198b-7df1-45bb-bd75-61b345d7397c] Running
	I0807 18:31:54.681781   44266 system_pods.go:89] "kube-apiserver-ha-198246-m03" [c589756a-dda8-44a8-82bb-60532e74eb8b] Running
	I0807 18:31:54.681794   44266 system_pods.go:89] "kube-controller-manager-ha-198246" [73522726-984c-4c1a-9eb6-c0c2eb896b45] Running
	I0807 18:31:54.681805   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m02" [84840391-d86d-45e5-a4f7-6daabbe16557] Running
	I0807 18:31:54.681820   44266 system_pods.go:89] "kube-controller-manager-ha-198246-m03" [5e0d97af-b071-4467-8c3a-dc71f904e84c] Running
	I0807 18:31:54.681830   44266 system_pods.go:89] "kube-proxy-4l79v" [649e12b4-4e77-48a9-af9c-691694c4ec99] Running
	I0807 18:31:54.681838   44266 system_pods.go:89] "kube-proxy-7mttr" [7cb96f6e-47a5-4d6c-a80e-77df1eafc970] Running
	I0807 18:31:54.681848   44266 system_pods.go:89] "kube-proxy-m5ng2" [ed3a0c5c-ff85-48e4-9165-329e89fdb64a] Running
	I0807 18:31:54.682159   44266 system_pods.go:89] "kube-scheduler-ha-198246" [dd45e791-8b98-4d64-8131-c2736463faae] Running
	I0807 18:31:54.682175   44266 system_pods.go:89] "kube-scheduler-ha-198246-m02" [f9571af0-65a0-46eb-98ce-d982fa4a2cce] Running
	I0807 18:31:54.682180   44266 system_pods.go:89] "kube-scheduler-ha-198246-m03" [5fe100c3-b0a4-4499-a7e2-330c88ee8162] Running
	I0807 18:31:54.682185   44266 system_pods.go:89] "kube-vip-ha-198246" [a230b27d-cbec-4a1a-a7e7-7192f3de3915] Running
	I0807 18:31:54.682188   44266 system_pods.go:89] "kube-vip-ha-198246-m02" [9ef1c5a2-7829-4937-972d-ce53f60064f8] Running
	I0807 18:31:54.682192   44266 system_pods.go:89] "kube-vip-ha-198246-m03" [ba0ab294-fb6f-4161-82f7-288a2a0d4f13] Running
	I0807 18:31:54.682196   44266 system_pods.go:89] "storage-provisioner" [88457253-9aa8-4bd7-974f-1b47b341d40c] Running
	I0807 18:31:54.682205   44266 system_pods.go:126] duration metric: took 214.246128ms to wait for k8s-apps to be running ...
	I0807 18:31:54.682217   44266 system_svc.go:44] waiting for kubelet service to be running ....
	I0807 18:31:54.682265   44266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:31:54.698973   44266 system_svc.go:56] duration metric: took 16.748968ms WaitForService to wait for kubelet
	I0807 18:31:54.699002   44266 kubeadm.go:582] duration metric: took 23.60847153s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:31:54.699020   44266 node_conditions.go:102] verifying NodePressure condition ...
	I0807 18:31:54.864327   44266 request.go:629] Waited for 165.224496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.196:8443/api/v1/nodes
	I0807 18:31:54.864388   44266 round_trippers.go:463] GET https://192.168.39.196:8443/api/v1/nodes
	I0807 18:31:54.864395   44266 round_trippers.go:469] Request Headers:
	I0807 18:31:54.864407   44266 round_trippers.go:473]     Accept: application/json, */*
	I0807 18:31:54.864413   44266 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0807 18:31:54.867905   44266 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0807 18:31:54.868930   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868950   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868961   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868964   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868968   44266 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0807 18:31:54.868971   44266 node_conditions.go:123] node cpu capacity is 2
	I0807 18:31:54.868974   44266 node_conditions.go:105] duration metric: took 169.949978ms to run NodePressure ...
	I0807 18:31:54.868985   44266 start.go:241] waiting for startup goroutines ...
	I0807 18:31:54.869001   44266 start.go:255] writing updated cluster config ...
	I0807 18:31:54.869277   44266 ssh_runner.go:195] Run: rm -f paused
	I0807 18:31:54.921624   44266 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0807 18:31:54.924829   44266 out.go:177] * Done! kubectl is now configured to use "ha-198246" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.916507652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ccaf0e2-a893-4574-8b98-4180dd96eaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.916606753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ccaf0e2-a893-4574-8b98-4180dd96eaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.918357467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723055519101351291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319342376725,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"U
DP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723055319067011712,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,PodSandboxId:885cc92388628d238f8733c8a4e19dbe966de1d74cae5f0b0260d47f543204eb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING
,CreatedAt:1723055318987833300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CON
TAINER_RUNNING,CreatedAt:1723055306768350208,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723055302
363392306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,PodSandboxId:c3113eff4cbeab6d11557ebe28457c4fed8b799968cd7a8112552a9f26c0c7a1,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172305528372
0347825,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,PodSandboxId:1fcd84f97f1d17549fda334f2d795061561cad20b325aed47c328b7537d9e461,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723055280599506170,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723055280563764082,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723055280588797776,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,PodSandboxId:30588dee2a435159b1676038c3a1e71d8e794c98f645bd6032392139ac087781,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723055280520038813,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ccaf0e2-a893-4574-8b98-4180dd96eaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.922224791Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1a5ea813-776b-4bc2-afeb-3d7358cd03b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.922375322Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055519175572897,StartedAt:1723055519208613635,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/42848aea-5e18-4f5c-b59d-f615d5128a74/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/42848aea-5e18-4f5c-b59d-f615d5128a74/containers/busybox/565e84e5,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/42848aea-5e18-4f5c-b59d-f615d5128a74/volumes/kubernetes.io~projected/kube-api-access-mdsts,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-fc5497c4f-chh26_42848aea-5e18-4f5c-b59d-f615d5128a74/busybox/0.log,Resources:&ContainerR
esources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1a5ea813-776b-4bc2-afeb-3d7358cd03b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.922969436Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,Verbose:false,}" file="otel-collector/interceptors.go:62" id=2e3d5314-4015-4b9e-b46e-cfa4d03d7b4d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.923089958Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055319394161848,StartedAt:1723055319419666821,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPo
rt\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/143456ef-ffd1-4d42-b9d0-6b778094eca5/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/143456ef-ffd1-4d42-b9d0-6b778094eca5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/143456ef-ffd1-4d42-b9d0-6b778094eca5/containers/coredns/8bfec987,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGAT
ION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/143456ef-ffd1-4d42-b9d0-6b778094eca5/volumes/kubernetes.io~projected/kube-api-access-j55sb,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-w6w6g_143456ef-ffd1-4d42-b9d0-6b778094eca5/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2e3d5314-4015-4b9e-b46e-cfa4d03d7b4d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.923717167Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5e7f7aad-01f7-4283-92db-377f3238557d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.923852445Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055319131700926,StartedAt:1723055319166704711,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerP
ort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/96fa387b-f93b-40df-9ed6-78834f3d02df/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/96fa387b-f93b-40df-9ed6-78834f3d02df/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/96fa387b-f93b-40df-9ed6-78834f3d02df/containers/coredns/a2123e16,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGA
TION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/96fa387b-f93b-40df-9ed6-78834f3d02df/volumes/kubernetes.io~projected/kube-api-access-gcsfw,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-7db6d8ff4d-rbnrx_96fa387b-f93b-40df-9ed6-78834f3d02df/coredns/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5e7f7aad-01f7-4283-92db-377f3238557d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.924361817Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5ae3af99-38b3-4ed8-9b4a-52991d77d6bf name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.924535215Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055319061825377,StartedAt:1723055319101021240,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/storage-provisioner:v5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/tmp,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/88457253-9aa8-4bd7-974f-1b47b341d40c/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/88457253-9aa8-4bd7-974f-1b47b341d40c/containers/storage-provisioner/36e13ac9,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/88457253-9aa8-4bd7-974f-1b47b341d40c/volumes/kubernetes.io~projected/kube-api-access-ts7zg,Readonly:true,SelinuxRelabel:false,Propag
ation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_storage-provisioner_88457253-9aa8-4bd7-974f-1b47b341d40c/storage-provisioner/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5ae3af99-38b3-4ed8-9b4a-52991d77d6bf name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.924998967Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d0873502-6730-4f9a-b535-2e085ccd0f3d name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.925223566Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055306994672118,StartedAt:1723055307021875248,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kindest/kindnetd:v20240730-75a5af0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/574aa453-48ef-44ff-b10a-13142fc8cf7f/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/574aa453-48ef-44ff-b10a-13142fc8cf7f/containers/kindnet-cni/56b06f91,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/cni/net.d,HostPath:/etc/cn
i/net.d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/574aa453-48ef-44ff-b10a-13142fc8cf7f/volumes/kubernetes.io~projected/kube-api-access-llmrx,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kindnet-sgl8v_574aa453-48ef-44ff-b10a-13142fc8cf7f/kindnet-cni/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:10000,CpuShares:102,MemoryLimitInBytes:52428800,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:52428800,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d0873502-6730-4f9a-b535-2e085ccd0f3d n
ame=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.925824537Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=301d8d89-ed2c-489b-b881-2faf52267ba1 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.927840711Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055302400834776,StartedAt:1723055302440508558,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/649e12b4-4e77-48a9-af9c-691694c4ec99/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/649e12b4-4e77-48a9-af9c-691694c4ec99/containers/kube-proxy/e55c1361,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kube
let/pods/649e12b4-4e77-48a9-af9c-691694c4ec99/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/649e12b4-4e77-48a9-af9c-691694c4ec99/volumes/kubernetes.io~projected/kube-api-access-t78jp,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-4l79v_649e12b4-4e77-48a9-af9c-691694c4ec99/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector
/interceptors.go:74" id=301d8d89-ed2c-489b-b881-2faf52267ba1 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.928537653Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8cbf19b6-7408-4928-aa5c-90534706717a name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.928729904Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055283766281126,StartedAt:1723055283787898463,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.8.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f267a1609da84deb6a231872d87975b,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminati
onMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0f267a1609da84deb6a231872d87975b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0f267a1609da84deb6a231872d87975b/containers/kube-vip/e3da8bde,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/super-admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-198246_0f267a1609da84deb6a231872d87975b/kube-vip/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj
:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8cbf19b6-7408-4928-aa5c-90534706717a name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.929337771Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,Verbose:false,}" file="otel-collector/interceptors.go:62" id=9e9baea0-1c1b-4e52-9539-1a6f2fa2a837 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.929515922Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055280762879447,StartedAt:1723055280885877522,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b2b91906fc54e8232161e687fc4a9af5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b2b91906fc54e8232161e687fc4a9af5/containers/kube-apiserver/ce43c781,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/
minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-198246_b2b91906fc54e8232161e687fc4a9af5/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=9e9baea0-1c1b-4e52-9539-1a6f2fa2a837 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.930009643Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fde295d1-86f9-4300-a95c-34726ca522b7 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.930123955Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055280719113861,StartedAt:1723055280840567204,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c60b0b92792ae1d5ba11a7a2e649f612/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c60b0b92792ae1d5ba11a7a2e649f612/containers/etcd/a1ca489d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-198246_c60b0b
92792ae1d5ba11a7a2e649f612/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=fde295d1-86f9-4300-a95c-34726ca522b7 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.930939576Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d7024c1e-88ab-4a81-b036-4684d66228b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.931063160Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055280692071621,StartedAt:1723055280853791837,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/56b90546fb511b52cb0b98695e572bae/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/56b90546fb511b52cb0b98695e572bae/containers/kube-scheduler/3d042067,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-198246_56b90546fb511b52cb0b98695e572bae/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,C
puShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d7024c1e-88ab-4a81-b036-4684d66228b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.932138698Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7cf47caf-5fe8-4381-b3e7-88a90f67c940 name=/runtime.v1.RuntimeService/ContainerStatus
	Aug 07 18:37:06 ha-198246 crio[680]: time="2024-08-07 18:37:06.932525483Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1723055280608390721,StartedAt:1723055280725655787,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b12d62604f0b70faa552e6c44d8cd532/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b12d62604f0b70faa552e6c44d8cd532/containers/kube-controller-manager/942d5da7,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*
IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-ha-198246_b12d62604f0b70faa552e6c44d8cd532/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*H
ugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7cf47caf-5fe8-4381-b3e7-88a90f67c940 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80335e9819afd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 minutes ago       Running             busybox                   0                   4d0990efdcee8       busybox-fc5497c4f-chh26
	806c3ba54cd9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   a5394b2f1434b       coredns-7db6d8ff4d-w6w6g
	3f9784c457acb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   b57adade6ea15       coredns-7db6d8ff4d-rbnrx
	93fcff9b17b4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   885cc92388628       storage-provisioner
	5433090bdddca       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago       Running             kindnet-cni               0                   bd5d340b4a584       kindnet-sgl8v
	c6c6220e1a7fb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   4aec116af531d       kube-proxy-4l79v
	305290711d544       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     9 minutes ago       Running             kube-vip                  0                   c3113eff4cbea       kube-vip-ha-198246
	4902df4367b62       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      9 minutes ago       Running             kube-apiserver            0                   1fcd84f97f1d1       kube-apiserver-ha-198246
	2ff4075c05c48       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      9 minutes ago       Running             kube-scheduler            0                   7c56ff7ba09a0       kube-scheduler-ha-198246
	981dfd0662596       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      0                   0e8285057cc05       etcd-ha-198246
	6c84edcc5a98f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      9 minutes ago       Running             kube-controller-manager   0                   30588dee2a435       kube-controller-manager-ha-198246
	
	
	==> coredns [3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7] <==
	[INFO] 10.244.1.2:60491 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000238403s
	[INFO] 10.244.1.2:56734 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110021s
	[INFO] 10.244.0.4:60444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100154s
	[INFO] 10.244.0.4:54868 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007045s
	[INFO] 10.244.0.4:55542 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001278843s
	[INFO] 10.244.0.4:41062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090569s
	[INFO] 10.244.0.4:45221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159605s
	[INFO] 10.244.0.4:52919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008416s
	[INFO] 10.244.2.2:57336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947478s
	[INFO] 10.244.2.2:58778 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148421s
	[INFO] 10.244.2.2:40534 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094901s
	[INFO] 10.244.2.2:34562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435891s
	[INFO] 10.244.2.2:40255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066647s
	[INFO] 10.244.2.2:33303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074642s
	[INFO] 10.244.2.2:54865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065816s
	[INFO] 10.244.1.2:56362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135028s
	[INFO] 10.244.1.2:50486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103508s
	[INFO] 10.244.0.4:60915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079398s
	[INFO] 10.244.2.2:36331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189607s
	[INFO] 10.244.1.2:44020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226665s
	[INFO] 10.244.1.2:47459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129465s
	[INFO] 10.244.0.4:59992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000059798s
	[INFO] 10.244.0.4:55811 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139124s
	[INFO] 10.244.2.2:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132316s
	[INFO] 10.244.2.2:34338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147334s
	
	
	==> coredns [806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab] <==
	[INFO] 10.244.0.4:54342 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000106253s
	[INFO] 10.244.2.2:37220 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.00009521s
	[INFO] 10.244.2.2:40447 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001945707s
	[INFO] 10.244.2.2:46546 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.003736918s
	[INFO] 10.244.1.2:40239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121833s
	[INFO] 10.244.1.2:39185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274854s
	[INFO] 10.244.1.2:32995 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301562s
	[INFO] 10.244.1.2:57764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00324711s
	[INFO] 10.244.0.4:43175 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001969935s
	[INFO] 10.244.0.4:47947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090373s
	[INFO] 10.244.2.2:59435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185691s
	[INFO] 10.244.1.2:41342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215074s
	[INFO] 10.244.1.2:58323 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133762s
	[INFO] 10.244.0.4:48395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131554s
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121525s
	[INFO] 10.244.0.4:53506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084053s
	[INFO] 10.244.2.2:47826 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205944s
	[INFO] 10.244.2.2:43418 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113361s
	[INFO] 10.244.2.2:53197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103281s
	[INFO] 10.244.1.2:51874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001263s
	[INFO] 10.244.1.2:40094 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205313s
	[INFO] 10.244.0.4:55591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001033s
	[INFO] 10.244.0.4:41281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083191s
	[INFO] 10.244.2.2:52214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093799s
	[INFO] 10.244.2.2:55578 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146065s
	
	
	==> describe nodes <==
	Name:               ha-198246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:28:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:37:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:32:12 +0000   Wed, 07 Aug 2024 18:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-198246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e31604902e0745d1a1407795d2ccbfc0
	  System UUID:                e3160490-2e07-45d1-a140-7795d2ccbfc0
	  Boot ID:                    9b0f1850-84af-432c-85c0-f24cda670347
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-chh26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 coredns-7db6d8ff4d-rbnrx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m47s
	  kube-system                 coredns-7db6d8ff4d-w6w6g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m47s
	  kube-system                 etcd-ha-198246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m3s
	  kube-system                 kindnet-sgl8v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m47s
	  kube-system                 kube-apiserver-ha-198246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-controller-manager-ha-198246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-proxy-4l79v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-scheduler-ha-198246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-vip-ha-198246                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m44s  kube-proxy       
	  Normal  Starting                 9m1s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m1s   kubelet          Node ha-198246 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s   kubelet          Node ha-198246 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s   kubelet          Node ha-198246 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m47s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal  NodeReady                8m29s  kubelet          Node ha-198246 status is now: NodeReady
	  Normal  RegisteredNode           6m42s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	
	
	Name:               ha-198246-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:30:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:33:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 18:32:09 +0000   Wed, 07 Aug 2024 18:34:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-198246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8eadf45fa3a45c1ace8b37287f97c9d
	  System UUID:                b8eadf45-fa3a-45c1-ace8-b37287f97c9d
	  Boot ID:                    7900c294-c092-44a8-b18b-e0879a5b10ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g62d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 etcd-ha-198246-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m58s
	  kube-system                 kindnet-8x6fj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m
	  kube-system                 kube-apiserver-ha-198246-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 kube-controller-manager-ha-198246-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-m5ng2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-scheduler-ha-198246-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-vip-ha-198246-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 6m55s            kube-proxy       
	  Normal  NodeHasSufficientMemory  7m (x8 over 7m)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m (x8 over 7m)  kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m (x7 over 7m)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m57s            node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           6m42s            node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           5m23s            node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  NodeNotReady             2m53s            node-controller  Node ha-198246-m02 status is now: NodeNotReady
	
	
	Name:               ha-198246-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_31_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:31:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:37:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:32:28 +0000   Wed, 07 Aug 2024 18:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-198246-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60409ac81f5346078f5f2d7599678540
	  System UUID:                60409ac8-1f53-4607-8f5f-2d7599678540
	  Boot ID:                    30ed0e62-43cd-4d25-85c3-6ffd341eb52a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k2t25                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 etcd-ha-198246-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-7854s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-198246-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-198246-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-proxy-7mttr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-198246-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-198246-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m41s)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m41s)  kubelet          Node ha-198246-m03 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	
	
	Name:               ha-198246-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:36:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:32:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:33:21 +0000   Wed, 07 Aug 2024 18:33:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-198246-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e050b6016e8b45679acbdd2b5c7bde62
	  System UUID:                e050b601-6e8b-4567-9acb-dd2b5c7bde62
	  Boot ID:                    3b3e9caf-949c-417a-90da-edc98697cdac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5vj44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m35s
	  kube-system                 kube-proxy-5ggpl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m35s (x2 over 4m35s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x2 over 4m35s)  kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s (x2 over 4m35s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m33s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  RegisteredNode           4m32s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-198246-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 7 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050670] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.791892] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.561405] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603000] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.529902] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.057949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071605] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183672] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.110780] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.300871] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.248154] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.501138] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062750] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 7 18:28] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.095778] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.277376] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.193932] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:30] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5] <==
	{"level":"warn","ts":"2024-08-07T18:37:07.140905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.175974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.210879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.220863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.228557Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.240811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.247136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.254918Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.261886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.265798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.270066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.278786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.285305Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.291403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.295725Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.299108Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.308593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.31484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.321185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.325061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.328272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.335201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.340784Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.347114Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:37:07.354867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:37:07 up 9 min,  0 users,  load average: 0.87, 0.43, 0.22
	Linux ha-198246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554] <==
	I0807 18:36:28.091744       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:36:38.095310       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:36:38.095358       1 main.go:299] handling current node
	I0807 18:36:38.095378       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:36:38.095383       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:36:38.095613       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:36:38.095642       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:36:38.095699       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:36:38.095704       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:36:48.099667       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:36:48.099716       1 main.go:299] handling current node
	I0807 18:36:48.099730       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:36:48.099735       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:36:48.099887       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:36:48.099909       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:36:48.099960       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:36:48.099966       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:36:58.093506       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:36:58.093745       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:36:58.093935       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:36:58.093963       1 main.go:299] handling current node
	I0807 18:36:58.094004       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:36:58.094021       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:36:58.094139       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:36:58.094166       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473] <==
	I0807 18:28:05.757651       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0807 18:28:05.765720       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196]
	I0807 18:28:05.766882       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 18:28:05.772395       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 18:28:05.830060       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 18:28:06.776266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 18:28:06.809546       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0807 18:28:06.821673       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 18:28:20.248559       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0807 18:28:20.348011       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0807 18:32:00.535866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57744: use of closed network connection
	E0807 18:32:00.744066       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57766: use of closed network connection
	E0807 18:32:00.952672       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57792: use of closed network connection
	E0807 18:32:01.172355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57812: use of closed network connection
	E0807 18:32:01.352150       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57828: use of closed network connection
	E0807 18:32:01.532194       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57846: use of closed network connection
	E0807 18:32:01.714325       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57860: use of closed network connection
	E0807 18:32:01.900647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57872: use of closed network connection
	E0807 18:32:02.087553       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57896: use of closed network connection
	E0807 18:32:02.383817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57920: use of closed network connection
	E0807 18:32:02.568053       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57936: use of closed network connection
	E0807 18:32:02.768857       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57942: use of closed network connection
	E0807 18:32:02.971250       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57960: use of closed network connection
	E0807 18:32:03.156171       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57992: use of closed network connection
	E0807 18:32:03.335581       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58008: use of closed network connection
	
	
	==> kube-controller-manager [6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36] <==
	I0807 18:31:30.326416       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246-m03"
	I0807 18:31:55.863780       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.509454ms"
	I0807 18:31:55.909903       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.038129ms"
	I0807 18:31:56.006853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.823451ms"
	I0807 18:31:56.148782       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="141.712519ms"
	I0807 18:31:56.149891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="322.281µs"
	I0807 18:31:56.191596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.550452ms"
	I0807 18:31:56.191748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.564µs"
	I0807 18:31:56.760379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.592µs"
	I0807 18:31:56.902720       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.37µs"
	I0807 18:31:57.234083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.456µs"
	I0807 18:31:59.698073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.032139ms"
	I0807 18:31:59.698278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.373µs"
	I0807 18:31:59.804042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.215774ms"
	I0807 18:31:59.804158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.476µs"
	I0807 18:32:00.080762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.328982ms"
	I0807 18:32:00.082206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.736µs"
	E0807 18:32:32.063101       1 certificate_controller.go:146] Sync csr-btqqk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-btqqk": the object has been modified; please apply your changes to the latest version and try again
	I0807 18:32:32.310340       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-198246-m04\" does not exist"
	I0807 18:32:32.379172       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-198246-m04" podCIDRs=["10.244.3.0/24"]
	I0807 18:32:35.352861       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246-m04"
	I0807 18:33:21.056413       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	I0807 18:34:14.817275       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	I0807 18:34:14.871985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.917944ms"
	I0807 18:34:14.873327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.625µs"
	
	
	==> kube-proxy [c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8] <==
	I0807 18:28:22.580618       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:28:22.601637       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.196"]
	I0807 18:28:22.654297       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:28:22.654381       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:28:22.654403       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:28:22.658197       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:28:22.658748       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:28:22.658783       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:28:22.661148       1 config.go:192] "Starting service config controller"
	I0807 18:28:22.661385       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:28:22.661502       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:28:22.661508       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:28:22.662750       1 config.go:319] "Starting node config controller"
	I0807 18:28:22.662780       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:28:22.761662       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:28:22.761768       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:28:22.763105       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56] <==
	E0807 18:28:05.163012       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:28:05.164577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:28:05.164616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:28:05.283884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:28:05.283932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:28:05.320413       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:28:05.320504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:28:05.373610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:28:05.373694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0807 18:28:06.678552       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 18:32:32.502898       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-z8cdn\": pod kindnet-z8cdn is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-z8cdn" node="ha-198246-m04"
	E0807 18:32:32.503513       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bc6ed049-d9fb-4132-b192-8015240cb919(kube-system/kindnet-z8cdn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-z8cdn"
	E0807 18:32:32.503593       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-z8cdn\": pod kindnet-z8cdn is already assigned to node \"ha-198246-m04\"" pod="kube-system/kindnet-z8cdn"
	I0807 18:32:32.503644       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-z8cdn" node="ha-198246-m04"
	E0807 18:32:32.551938       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cv65q\": pod kube-proxy-cv65q is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cv65q" node="ha-198246-m04"
	E0807 18:32:32.553290       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cv65q\": pod kube-proxy-cv65q is already assigned to node \"ha-198246-m04\"" pod="kube-system/kube-proxy-cv65q"
	E0807 18:32:32.556989       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vndzm\": pod kindnet-vndzm is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vndzm" node="ha-198246-m04"
	E0807 18:32:32.557081       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vndzm\": pod kindnet-vndzm is already assigned to node \"ha-198246-m04\"" pod="kube-system/kindnet-vndzm"
	E0807 18:32:36.244172       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5ggpl\": pod kube-proxy-5ggpl is already assigned to node \"ha-198246-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5ggpl" node="ha-198246-m04"
	E0807 18:32:36.244315       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2ed71e43-edd6-4262-a1ed-a3232e717574(kube-system/kube-proxy-5ggpl) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-5ggpl"
	E0807 18:32:36.244399       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5ggpl\": pod kube-proxy-5ggpl is already assigned to node \"ha-198246-m04\"" pod="kube-system/kube-proxy-5ggpl"
	I0807 18:32:36.245064       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-5ggpl" node="ha-198246-m04"
	E0807 18:32:36.281841       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tdszb\": pod kube-proxy-tdszb is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kube-proxy-tdszb" node="ha-198246-m04"
	E0807 18:32:36.281939       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tdszb\": pod kube-proxy-tdszb is being deleted, cannot be assigned to a host" pod="kube-system/kube-proxy-tdszb"
	E0807 18:32:36.330630       1 schedule_one.go:1095] "Error updating pod" err="pods \"kube-proxy-tdszb\" not found" pod="kube-system/kube-proxy-tdszb"
	
	
	==> kubelet <==
	Aug 07 18:33:06 ha-198246 kubelet[1372]: E0807 18:33:06.768553    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:33:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:33:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:34:06 ha-198246 kubelet[1372]: E0807 18:34:06.758391    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:34:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:34:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:35:06 ha-198246 kubelet[1372]: E0807 18:35:06.757102    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:35:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:35:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:36:06 ha-198246 kubelet[1372]: E0807 18:36:06.757687    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:36:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:36:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:36:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:36:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:37:06 ha-198246 kubelet[1372]: E0807 18:37:06.772563    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:37:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:37:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:37:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:37:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198246 -n ha-198246
helpers_test.go:261: (dbg) Run:  kubectl --context ha-198246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.28s)
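The kubelet entries in the post-mortem log above are the periodic iptables canary check failing: ip6tables cannot initialize the nat table inside the guest, which typically points at the ip6table_nat kernel module not being available in the minikube guest kernel. The kube-scheduler "already assigned to node" binding errors look like the usual retry race on DaemonSet pods (kube-proxy, kindnet) that the scheduler recovers from on its own. A diagnostic sketch, not part of this test run (the module name and the assumption that the guest kernel ships it are mine):

	out/minikube-linux-amd64 -p ha-198246 ssh -- sudo modprobe ip6table_nat
	out/minikube-linux-amd64 -p ha-198246 ssh -- sudo ip6tables -t nat -L -n
	kubectl --context ha-198246 -n kube-system get pods -o wide --field-selector spec.nodeName=ha-198246-m04

If the modprobe fails inside the guest, the canary error would be expected to keep recurring once per minute, as it does in the log above.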

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-198246 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-198246 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-198246 -v=7 --alsologtostderr: exit status 82 (2m1.89282238s)

                                                
                                                
-- stdout --
	* Stopping node "ha-198246-m04"  ...
	* Stopping node "ha-198246-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:37:08.809428   50452 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:37:08.809569   50452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:37:08.809578   50452 out.go:304] Setting ErrFile to fd 2...
	I0807 18:37:08.809582   50452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:37:08.809781   50452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:37:08.809999   50452 out.go:298] Setting JSON to false
	I0807 18:37:08.810087   50452 mustload.go:65] Loading cluster: ha-198246
	I0807 18:37:08.810427   50452 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:37:08.810509   50452 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:37:08.810671   50452 mustload.go:65] Loading cluster: ha-198246
	I0807 18:37:08.810794   50452 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:37:08.810820   50452 stop.go:39] StopHost: ha-198246-m04
	I0807 18:37:08.811201   50452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:08.811237   50452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:08.825666   50452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0807 18:37:08.826083   50452 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:08.826620   50452 main.go:141] libmachine: Using API Version  1
	I0807 18:37:08.826640   50452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:08.827010   50452 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:08.829565   50452 out.go:177] * Stopping node "ha-198246-m04"  ...
	I0807 18:37:08.831089   50452 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0807 18:37:08.831122   50452 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:37:08.831367   50452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0807 18:37:08.831396   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:37:08.834101   50452 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:08.834510   50452 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:32:18 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:37:08.834541   50452 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:37:08.834616   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:37:08.834794   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:37:08.834949   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:37:08.835049   50452 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:37:08.925148   50452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0807 18:37:08.979238   50452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0807 18:37:09.035840   50452 main.go:141] libmachine: Stopping "ha-198246-m04"...
	I0807 18:37:09.035863   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:37:09.037426   50452 main.go:141] libmachine: (ha-198246-m04) Calling .Stop
	I0807 18:37:09.041049   50452 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 0/120
	I0807 18:37:10.231965   50452 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:37:10.233241   50452 main.go:141] libmachine: Machine "ha-198246-m04" was stopped.
	I0807 18:37:10.233256   50452 stop.go:75] duration metric: took 1.402170568s to stop
	I0807 18:37:10.233290   50452 stop.go:39] StopHost: ha-198246-m03
	I0807 18:37:10.233592   50452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:37:10.233633   50452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:37:10.248266   50452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0807 18:37:10.248706   50452 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:37:10.249184   50452 main.go:141] libmachine: Using API Version  1
	I0807 18:37:10.249208   50452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:37:10.249580   50452 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:37:10.251833   50452 out.go:177] * Stopping node "ha-198246-m03"  ...
	I0807 18:37:10.253196   50452 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0807 18:37:10.253238   50452 main.go:141] libmachine: (ha-198246-m03) Calling .DriverName
	I0807 18:37:10.253505   50452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0807 18:37:10.253531   50452 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHHostname
	I0807 18:37:10.256659   50452 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:10.257057   50452 main.go:141] libmachine: (ha-198246-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:24:52", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:30:48 +0000 UTC Type:0 Mac:52:54:00:9d:24:52 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-198246-m03 Clientid:01:52:54:00:9d:24:52}
	I0807 18:37:10.257089   50452 main.go:141] libmachine: (ha-198246-m03) DBG | domain ha-198246-m03 has defined IP address 192.168.39.227 and MAC address 52:54:00:9d:24:52 in network mk-ha-198246
	I0807 18:37:10.257293   50452 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHPort
	I0807 18:37:10.257553   50452 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHKeyPath
	I0807 18:37:10.257747   50452 main.go:141] libmachine: (ha-198246-m03) Calling .GetSSHUsername
	I0807 18:37:10.257951   50452 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m03/id_rsa Username:docker}
	I0807 18:37:10.348342   50452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0807 18:37:10.403389   50452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0807 18:37:10.458987   50452 main.go:141] libmachine: Stopping "ha-198246-m03"...
	I0807 18:37:10.459013   50452 main.go:141] libmachine: (ha-198246-m03) Calling .GetState
	I0807 18:37:10.460874   50452 main.go:141] libmachine: (ha-198246-m03) Calling .Stop
	I0807 18:37:10.464467   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 0/120
	I0807 18:37:11.466628   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 1/120
	I0807 18:37:12.468197   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 2/120
	I0807 18:37:13.469591   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 3/120
	I0807 18:37:14.471187   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 4/120
	I0807 18:37:15.472814   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 5/120
	I0807 18:37:16.474790   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 6/120
	I0807 18:37:17.476449   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 7/120
	I0807 18:37:18.477812   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 8/120
	I0807 18:37:19.479449   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 9/120
	I0807 18:37:20.481779   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 10/120
	I0807 18:37:21.483893   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 11/120
	I0807 18:37:22.485295   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 12/120
	I0807 18:37:23.486689   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 13/120
	I0807 18:37:24.488186   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 14/120
	I0807 18:37:25.490193   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 15/120
	I0807 18:37:26.491885   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 16/120
	I0807 18:37:27.493284   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 17/120
	I0807 18:37:28.494724   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 18/120
	I0807 18:37:29.496517   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 19/120
	I0807 18:37:30.498776   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 20/120
	I0807 18:37:31.500620   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 21/120
	I0807 18:37:32.502620   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 22/120
	I0807 18:37:33.504246   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 23/120
	I0807 18:37:34.505834   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 24/120
	I0807 18:37:35.507588   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 25/120
	I0807 18:37:36.509048   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 26/120
	I0807 18:37:37.510676   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 27/120
	I0807 18:37:38.512419   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 28/120
	I0807 18:37:39.513854   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 29/120
	I0807 18:37:40.515707   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 30/120
	I0807 18:37:41.517083   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 31/120
	I0807 18:37:42.518520   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 32/120
	I0807 18:37:43.519932   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 33/120
	I0807 18:37:44.521517   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 34/120
	I0807 18:37:45.523285   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 35/120
	I0807 18:37:46.524618   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 36/120
	I0807 18:37:47.526544   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 37/120
	I0807 18:37:48.527966   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 38/120
	I0807 18:37:49.530162   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 39/120
	I0807 18:37:50.531861   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 40/120
	I0807 18:37:51.533141   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 41/120
	I0807 18:37:52.534476   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 42/120
	I0807 18:37:53.535744   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 43/120
	I0807 18:37:54.537302   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 44/120
	I0807 18:37:55.538979   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 45/120
	I0807 18:37:56.540185   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 46/120
	I0807 18:37:57.541917   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 47/120
	I0807 18:37:58.543155   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 48/120
	I0807 18:37:59.544781   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 49/120
	I0807 18:38:00.546686   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 50/120
	I0807 18:38:01.547931   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 51/120
	I0807 18:38:02.549465   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 52/120
	I0807 18:38:03.550968   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 53/120
	I0807 18:38:04.552578   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 54/120
	I0807 18:38:05.554420   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 55/120
	I0807 18:38:06.555805   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 56/120
	I0807 18:38:07.557256   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 57/120
	I0807 18:38:08.558610   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 58/120
	I0807 18:38:09.560045   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 59/120
	I0807 18:38:10.561626   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 60/120
	I0807 18:38:11.563124   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 61/120
	I0807 18:38:12.564403   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 62/120
	I0807 18:38:13.565996   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 63/120
	I0807 18:38:14.567219   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 64/120
	I0807 18:38:15.569190   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 65/120
	I0807 18:38:16.571412   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 66/120
	I0807 18:38:17.572768   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 67/120
	I0807 18:38:18.574623   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 68/120
	I0807 18:38:19.575951   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 69/120
	I0807 18:38:20.577692   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 70/120
	I0807 18:38:21.578881   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 71/120
	I0807 18:38:22.580299   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 72/120
	I0807 18:38:23.581865   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 73/120
	I0807 18:38:24.583193   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 74/120
	I0807 18:38:25.585005   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 75/120
	I0807 18:38:26.586333   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 76/120
	I0807 18:38:27.587503   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 77/120
	I0807 18:38:28.588818   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 78/120
	I0807 18:38:29.590151   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 79/120
	I0807 18:38:30.592059   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 80/120
	I0807 18:38:31.593450   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 81/120
	I0807 18:38:32.594633   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 82/120
	I0807 18:38:33.595861   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 83/120
	I0807 18:38:34.597401   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 84/120
	I0807 18:38:35.599182   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 85/120
	I0807 18:38:36.600848   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 86/120
	I0807 18:38:37.602570   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 87/120
	I0807 18:38:38.604191   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 88/120
	I0807 18:38:39.605586   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 89/120
	I0807 18:38:40.607406   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 90/120
	I0807 18:38:41.608682   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 91/120
	I0807 18:38:42.609930   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 92/120
	I0807 18:38:43.611359   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 93/120
	I0807 18:38:44.612906   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 94/120
	I0807 18:38:45.614868   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 95/120
	I0807 18:38:46.616701   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 96/120
	I0807 18:38:47.617944   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 97/120
	I0807 18:38:48.619411   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 98/120
	I0807 18:38:49.620797   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 99/120
	I0807 18:38:50.622428   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 100/120
	I0807 18:38:51.623665   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 101/120
	I0807 18:38:52.624981   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 102/120
	I0807 18:38:53.626310   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 103/120
	I0807 18:38:54.627875   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 104/120
	I0807 18:38:55.629695   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 105/120
	I0807 18:38:56.630948   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 106/120
	I0807 18:38:57.632245   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 107/120
	I0807 18:38:58.633608   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 108/120
	I0807 18:38:59.635082   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 109/120
	I0807 18:39:00.636902   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 110/120
	I0807 18:39:01.638769   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 111/120
	I0807 18:39:02.640119   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 112/120
	I0807 18:39:03.641511   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 113/120
	I0807 18:39:04.642752   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 114/120
	I0807 18:39:05.644733   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 115/120
	I0807 18:39:06.646306   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 116/120
	I0807 18:39:07.647550   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 117/120
	I0807 18:39:08.649022   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 118/120
	I0807 18:39:09.650365   50452 main.go:141] libmachine: (ha-198246-m03) Waiting for machine to stop 119/120
	I0807 18:39:10.651208   50452 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0807 18:39:10.651281   50452 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0807 18:39:10.653340   50452 out.go:177] 
	W0807 18:39:10.654975   50452 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0807 18:39:10.654989   50452 out.go:239] * 
	* 
	W0807 18:39:10.657241   50452 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:39:10.658541   50452 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-198246 -v=7 --alsologtostderr" : exit status 82
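Exit status 82 here is minikube's GUEST_STOP_TIMEOUT path: ha-198246-m04 stopped in about 1.4s, but ha-198246-m03 never reported stopped through all 120 one-second polls, so the stop aborted after roughly two minutes with the VM still "Running". A manual follow-up against the kvm2 driver might look like the sketch below; these commands are not from this run, and they assume the libvirt domain name matches the one shown in the DBG lines above and the qemu:///system URI from the profile config:

	virsh -c qemu:///system list --all
	virsh -c qemu:///system shutdown ha-198246-m03    # graceful ACPI shutdown
	virsh -c qemu:///system destroy ha-198246-m03     # hard power-off if shutdown hangs
	out/minikube-linux-amd64 stop -p ha-198246 -v=7 --alsologtostderr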
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-198246 --wait=true -v=7 --alsologtostderr
E0807 18:41:31.076598   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-198246 --wait=true -v=7 --alsologtostderr: (4m0.489398267s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-198246
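The subsequent start --wait=true completed in about four minutes, so the cluster itself came back; the sub-test is still marked failed because the earlier stop returned exit status 82. A quick way to confirm the four-node topology recovered, sketched here rather than taken from the harness:

	out/minikube-linux-amd64 -p ha-198246 status
	kubectl --context ha-198246 get nodes -o wide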
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198246 -n ha-198246
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-198246 logs -n 25: (2.006626357s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m04 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp testdata/cp-test.txt                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m04_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03:/home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m03 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-198246 node stop m02 -v=7                                                     | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-198246 node start m02 -v=7                                                    | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-198246 -v=7                                                           | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-198246 -v=7                                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-198246 --wait=true -v=7                                                    | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:39 UTC | 07 Aug 24 18:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-198246                                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:43 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:39:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:39:10.703961   50940 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:39:10.704063   50940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:39:10.704074   50940 out.go:304] Setting ErrFile to fd 2...
	I0807 18:39:10.704080   50940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:39:10.704321   50940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:39:10.704903   50940 out.go:298] Setting JSON to false
	I0807 18:39:10.705810   50940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8497,"bootTime":1723047454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:39:10.705868   50940 start.go:139] virtualization: kvm guest
	I0807 18:39:10.708186   50940 out.go:177] * [ha-198246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 18:39:10.709520   50940 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:39:10.709538   50940 notify.go:220] Checking for updates...
	I0807 18:39:10.712003   50940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:39:10.713396   50940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:39:10.714731   50940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:39:10.715948   50940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:39:10.717225   50940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:39:10.718787   50940 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:39:10.718904   50940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:39:10.719278   50940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:39:10.719351   50940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:39:10.733872   50940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0807 18:39:10.734299   50940 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:39:10.734849   50940 main.go:141] libmachine: Using API Version  1
	I0807 18:39:10.734868   50940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:39:10.735149   50940 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:39:10.735301   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.771445   50940 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 18:39:10.772781   50940 start.go:297] selected driver: kvm2
	I0807 18:39:10.772800   50940 start.go:901] validating driver "kvm2" against &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:39:10.772957   50940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:39:10.773299   50940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:39:10.773371   50940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 18:39:10.789261   50940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 18:39:10.789911   50940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:39:10.789973   50940 cni.go:84] Creating CNI manager for ""
	I0807 18:39:10.789984   50940 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0807 18:39:10.790037   50940 start.go:340] cluster config:
	{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:39:10.790185   50940 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:39:10.792195   50940 out.go:177] * Starting "ha-198246" primary control-plane node in "ha-198246" cluster
	I0807 18:39:10.793566   50940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:39:10.793603   50940 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 18:39:10.793610   50940 cache.go:56] Caching tarball of preloaded images
	I0807 18:39:10.793702   50940 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:39:10.793712   50940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:39:10.793820   50940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:39:10.794024   50940 start.go:360] acquireMachinesLock for ha-198246: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:39:10.794065   50940 start.go:364] duration metric: took 22.799µs to acquireMachinesLock for "ha-198246"
	I0807 18:39:10.794079   50940 start.go:96] Skipping create...Using existing machine configuration
	I0807 18:39:10.794090   50940 fix.go:54] fixHost starting: 
	I0807 18:39:10.794381   50940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:39:10.794425   50940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:39:10.809066   50940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0807 18:39:10.809462   50940 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:39:10.809959   50940 main.go:141] libmachine: Using API Version  1
	I0807 18:39:10.809986   50940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:39:10.810308   50940 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:39:10.810495   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.810681   50940 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:39:10.812239   50940 fix.go:112] recreateIfNeeded on ha-198246: state=Running err=<nil>
	W0807 18:39:10.812270   50940 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 18:39:10.814145   50940 out.go:177] * Updating the running kvm2 "ha-198246" VM ...
	I0807 18:39:10.815433   50940 machine.go:94] provisionDockerMachine start ...
	I0807 18:39:10.815451   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.815630   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:10.817810   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.818187   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:10.818213   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.818335   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:10.818513   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.818654   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.818749   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:10.818901   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:10.819100   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:10.819115   50940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:39:10.925855   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:39:10.925879   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:10.926078   50940 buildroot.go:166] provisioning hostname "ha-198246"
	I0807 18:39:10.926140   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:10.926308   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:10.928840   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.929204   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:10.929237   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.929390   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:10.929562   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.929724   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.929880   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:10.930029   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:10.930205   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:10.930217   50940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246 && echo "ha-198246" | sudo tee /etc/hostname
	I0807 18:39:11.053216   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:39:11.053240   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.055783   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.056163   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.056191   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.056375   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.056558   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.056730   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.056872   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.057063   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:11.057246   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:11.057262   50940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:39:11.166536   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:39:11.166570   50940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:39:11.166612   50940 buildroot.go:174] setting up certificates
	I0807 18:39:11.166625   50940 provision.go:84] configureAuth start
	I0807 18:39:11.166654   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:11.166901   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:39:11.169619   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.169944   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.169968   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.170103   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.171922   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.172247   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.172274   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.172424   50940 provision.go:143] copyHostCerts
	I0807 18:39:11.172454   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:39:11.172522   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:39:11.172534   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:39:11.172630   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:39:11.172747   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:39:11.172773   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:39:11.172782   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:39:11.172826   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:39:11.172918   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:39:11.172943   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:39:11.172951   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:39:11.172980   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:39:11.173031   50940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246 san=[127.0.0.1 192.168.39.196 ha-198246 localhost minikube]
	I0807 18:39:11.343149   50940 provision.go:177] copyRemoteCerts
	I0807 18:39:11.343209   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:39:11.343232   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.345780   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.346082   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.346106   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.346304   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.346476   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.346624   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.346732   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:39:11.433507   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:39:11.433590   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:39:11.466276   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:39:11.466358   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:39:11.502337   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:39:11.502412   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:39:11.534170   50940 provision.go:87] duration metric: took 367.53308ms to configureAuth
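	Not part of the log: the server certificate copied to /etc/docker/server.pem above was generated with the SANs listed in the san=[...] line; a quick way to confirm them on the guest (sketch, assuming openssl is present in the guest image) would be:
	  sudo openssl x509 -noout -text -in /etc/docker/server.pem \
	    | grep -A1 'Subject Alternative Name'
	  # expected to list 127.0.0.1, 192.168.39.196, ha-198246, localhost, minikube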
	I0807 18:39:11.534194   50940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:39:11.534425   50940 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:39:11.534509   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.537345   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.537777   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.537807   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.537990   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.538146   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.538290   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.538520   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.538671   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:11.538819   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:11.538832   50940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:40:42.400613   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:40:42.400643   50940 machine.go:97] duration metric: took 1m31.585196452s to provisionDockerMachine
	I0807 18:40:42.400658   50940 start.go:293] postStartSetup for "ha-198246" (driver="kvm2")
	I0807 18:40:42.400671   50940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:40:42.400693   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.401072   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:40:42.401099   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.404010   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.404477   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.404504   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.404643   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.404845   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.405021   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.405173   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.490224   50940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:40:42.494616   50940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:40:42.494641   50940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:40:42.494695   50940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:40:42.494777   50940 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:40:42.494787   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:40:42.494880   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:40:42.504515   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:40:42.528517   50940 start.go:296] duration metric: took 127.843726ms for postStartSetup
	I0807 18:40:42.528575   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.528885   50940 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0807 18:40:42.528916   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.531653   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.532011   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.532033   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.532169   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.532357   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.532511   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.532684   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	W0807 18:40:42.615140   50940 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0807 18:40:42.615173   50940 fix.go:56] duration metric: took 1m31.821083908s for fixHost
	I0807 18:40:42.615216   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.617521   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.617867   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.617897   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.618041   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.618255   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.618460   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.618620   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.618763   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:42.618954   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:40:42.618968   50940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 18:40:42.720957   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723056042.683925571
	
	I0807 18:40:42.720977   50940 fix.go:216] guest clock: 1723056042.683925571
	I0807 18:40:42.720984   50940 fix.go:229] Guest: 2024-08-07 18:40:42.683925571 +0000 UTC Remote: 2024-08-07 18:40:42.615179881 +0000 UTC m=+91.947737851 (delta=68.74569ms)
	I0807 18:40:42.721007   50940 fix.go:200] guest clock delta is within tolerance: 68.74569ms
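	fix.go above reads the guest clock over SSH with `date +%s.%N` and accepts the ~69ms delta against the local time; a rough manual re-check along the same lines (sketch, using the SSH key and user shown in the log) might look like:
	  guest=$(ssh -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa \
	    docker@192.168.39.196 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "clock delta: $(echo "$host - $guest" | bc) s"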
	I0807 18:40:42.721012   50940 start.go:83] releasing machines lock for "ha-198246", held for 1m31.926938457s
	I0807 18:40:42.721032   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.721329   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:40:42.723792   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.724195   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.724240   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.724377   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.724857   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.725008   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.725089   50940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:40:42.725128   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.725228   50940 ssh_runner.go:195] Run: cat /version.json
	I0807 18:40:42.725251   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.727728   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.727874   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728078   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.728105   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728353   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.728389   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.728425   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728514   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.728576   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.728654   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.728712   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.728759   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.728827   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.728971   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.831499   50940 ssh_runner.go:195] Run: systemctl --version
	I0807 18:40:42.838625   50940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:40:43.001017   50940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:40:43.011761   50940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:40:43.011846   50940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:40:43.021790   50940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 18:40:43.021809   50940 start.go:495] detecting cgroup driver to use...
	I0807 18:40:43.021870   50940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:40:43.038892   50940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:40:43.052946   50940 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:40:43.053011   50940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:40:43.067629   50940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:40:43.082931   50940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:40:43.245782   50940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:40:43.399464   50940 docker.go:233] disabling docker service ...
	I0807 18:40:43.399546   50940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:40:43.417233   50940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:40:43.431474   50940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:40:43.579777   50940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:40:43.746564   50940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:40:43.761155   50940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:40:43.780543   50940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:40:43.780608   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.791780   50940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:40:43.791856   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.802772   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.813558   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.824211   50940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:40:43.835548   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.847533   50940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.859249   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.870454   50940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:40:43.880638   50940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:40:43.890756   50940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:40:44.038924   50940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:40:47.981542   50940 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.942581083s)
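	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart; a simple way to confirm the resulting values on the guest (sketch) would be:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf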
	I0807 18:40:47.981573   50940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:40:47.981627   50940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:40:47.989195   50940 start.go:563] Will wait 60s for crictl version
	I0807 18:40:47.989269   50940 ssh_runner.go:195] Run: which crictl
	I0807 18:40:47.993258   50940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:40:48.031869   50940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:40:48.031936   50940 ssh_runner.go:195] Run: crio --version
	I0807 18:40:48.060771   50940 ssh_runner.go:195] Run: crio --version
	I0807 18:40:48.094518   50940 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:40:48.095835   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:40:48.098609   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:48.098986   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:48.099017   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:48.099216   50940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:40:48.104569   50940 kubeadm.go:883] updating cluster {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:40:48.104704   50940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:40:48.104758   50940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:40:48.150334   50940 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:40:48.150361   50940 crio.go:433] Images already preloaded, skipping extraction
	I0807 18:40:48.150432   50940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:40:48.194374   50940 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:40:48.194398   50940 cache_images.go:84] Images are preloaded, skipping loading
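	Both `sudo crictl images --output json` calls above conclude that the preload already covers every required image; listing the tags explicitly (sketch, assuming jq happens to be available on the guest) would be:
	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort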
	I0807 18:40:48.194410   50940 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0807 18:40:48.194561   50940 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:40:48.194648   50940 ssh_runner.go:195] Run: crio config
	I0807 18:40:48.247930   50940 cni.go:84] Creating CNI manager for ""
	I0807 18:40:48.247947   50940 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0807 18:40:48.247965   50940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:40:48.247995   50940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198246 NodeName:ha-198246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:40:48.248142   50940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-198246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 18:40:48.248170   50940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:40:48.248231   50940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:40:48.260017   50940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
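	kube-vip's control-plane load-balancing relies on the IPVS modules loaded by the modprobe above; verifying they are present (sketch, not part of the log) is a one-liner:
	  lsmod | grep -E '^(ip_vs|ip_vs_rr|ip_vs_wrr|ip_vs_sh|nf_conntrack)\b'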
	I0807 18:40:48.260127   50940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
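	Once this static-pod manifest is dropped into /etc/kubernetes/manifests (done further below), the VIP 192.168.39.254 should be announced on eth0 of the leading control-plane node; a rough post-start check (sketch) could be:
	  sudo crictl ps --name kube-vip            # the static pod's container is running
	  ip addr show eth0 | grep 192.168.39.254   # the VIP is bound on the current leader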
	I0807 18:40:48.260196   50940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:40:48.270430   50940 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:40:48.270492   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:40:48.280714   50940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:40:48.298345   50940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:40:48.317202   50940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0807 18:40:48.335643   50940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
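	The kubeadm config printed earlier is staged as /var/tmp/minikube/kubeadm.yaml.new above; a structural check of that file (sketch, using the bundled kubeadm binary; `kubeadm config validate` exists in recent kubeadm releases) would be:
	  sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new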
	I0807 18:40:48.353957   50940 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:40:48.359383   50940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:40:48.511387   50940 ssh_runner.go:195] Run: sudo systemctl start kubelet
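	After the daemon-reload and start above, the kubelet should be running with the drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; confirming that on the guest (sketch) looks like:
	  sudo systemctl is-active kubelet
	  sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in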
	I0807 18:40:48.526457   50940 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.196
	I0807 18:40:48.526483   50940 certs.go:194] generating shared ca certs ...
	I0807 18:40:48.526498   50940 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.526666   50940 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:40:48.526718   50940 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:40:48.526729   50940 certs.go:256] generating profile certs ...
	I0807 18:40:48.526822   50940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:40:48.526874   50940 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22
	I0807 18:40:48.526908   50940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.227 192.168.39.254]
	I0807 18:40:48.653522   50940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 ...
	I0807 18:40:48.653551   50940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22: {Name:mk0466195f8efb396bd8881926e4f02164fcccd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.653717   50940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22 ...
	I0807 18:40:48.653728   50940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22: {Name:mk40794fd88475757a06d369c33f0c55f282e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.653794   50940 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:40:48.653953   50940 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
	I0807 18:40:48.654082   50940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:40:48.654096   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:40:48.654109   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:40:48.654122   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:40:48.654133   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:40:48.654151   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:40:48.654160   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:40:48.654177   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:40:48.654188   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:40:48.654243   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:40:48.654272   50940 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:40:48.654278   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:40:48.654297   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:40:48.654315   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:40:48.654334   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:40:48.654371   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:40:48.654395   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.654409   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:48.654420   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:40:48.654987   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:40:48.682808   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:40:48.709144   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:40:48.734619   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:40:48.759348   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0807 18:40:48.784431   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:40:48.807829   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:40:48.832116   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:40:48.855849   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:40:48.879869   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:40:48.904829   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:40:48.929080   50940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:40:48.946022   50940 ssh_runner.go:195] Run: openssl version
	I0807 18:40:48.952109   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:40:48.963428   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.967946   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.967994   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.973699   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:40:48.984349   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:40:48.996437   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.001131   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.001192   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.006999   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:40:49.017011   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:40:49.028071   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.032454   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.032493   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.038275   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
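	The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes produced by the `openssl x509 -hash` calls; the convention can be checked by hand (sketch, not part of the log):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0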
	I0807 18:40:49.048034   50940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:40:49.052709   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 18:40:49.058418   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 18:40:49.064004   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 18:40:49.069490   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 18:40:49.075292   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 18:40:49.081431   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 18:40:49.087223   50940 kubeadm.go:392] StartCluster: {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:40:49.087330   50940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 18:40:49.087373   50940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 18:40:49.126348   50940 cri.go:89] found id: "ed4c5b7171a2e8de6e5c1692ca76f0a6cfd914813c567f16ac99ae2bc9e3bb6c"
	I0807 18:40:49.126369   50940 cri.go:89] found id: "b50bfdb91d10f8e89577e5d8b828877a309b9d44954f8e2e68d0522e801195dd"
	I0807 18:40:49.126374   50940 cri.go:89] found id: "0d5e41e989cec274969ba0eb8704ee50e0e5fe8adcfb6c56802de78ff130e1f1"
	I0807 18:40:49.126379   50940 cri.go:89] found id: "806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab"
	I0807 18:40:49.126383   50940 cri.go:89] found id: "3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7"
	I0807 18:40:49.126387   50940 cri.go:89] found id: "93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9"
	I0807 18:40:49.126390   50940 cri.go:89] found id: "5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554"
	I0807 18:40:49.126393   50940 cri.go:89] found id: "c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8"
	I0807 18:40:49.126396   50940 cri.go:89] found id: "305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d"
	I0807 18:40:49.126404   50940 cri.go:89] found id: "4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473"
	I0807 18:40:49.126412   50940 cri.go:89] found id: "2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56"
	I0807 18:40:49.126416   50940 cri.go:89] found id: "981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5"
	I0807 18:40:49.126420   50940 cri.go:89] found id: "6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36"
	I0807 18:40:49.126424   50940 cri.go:89] found id: ""
	I0807 18:40:49.126469   50940 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.056437196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056192056411744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97490b04-8abe-47da-94ca-444c5415dddd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.057231998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ee0173e-2e21-4605-910a-c71d18c49d0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.057306967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ee0173e-2e21-4605-910a-c71d18c49d0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.057779736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ee0173e-2e21-4605-910a-c71d18c49d0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.083116541Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=797fc1f6-a4ec-4830-885e-e698aa74bb4e name=/runtime.v1.RuntimeService/Status
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.083223976Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=797fc1f6-a4ec-4830-885e-e698aa74bb4e name=/runtime.v1.RuntimeService/Status
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.086295424Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1d852308-b4fb-4522-a48a-61d0bdffd7fd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.086868412Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-chh26,Uid:42848aea-5e18-4f5c-b59d-f615d5128a74,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056087904622519,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:31:55.870264799Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-198246,Uid:362cdc9ecf03b90e08cef0c047f19044,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723056068863324557,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{kubernetes.io/config.hash: 362cdc9ecf03b90e08cef0c047f19044,kubernetes.io/config.seen: 2024-08-07T18:40:48.317890276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&PodSandboxMetadata{Name:kube-proxy-4l79v,Uid:649e12b4-4e77-48a9-af9c-691694c4ec99,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054234977133,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/
config.seen: 2024-08-07T18:28:20.456193111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbnrx,Uid:96fa387b-f93b-40df-9ed6-78834f3d02df,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054226607026,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:38.542187868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&PodSandboxMetadata{Name:kindnet-sgl8v,Uid:574aa453-48ef-44ff-b10a-13142fc8cf7f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054216522795,Labels:map
[string]string{app: kindnet,controller-revision-hash: 7c6d997646,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:20.468552308Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:88457253-9aa8-4bd7-974f-1b47b341d40c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054211709783,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{kubectl.kube
rnetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-07T18:28:38.537412790Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-198246,Uid:b12d62604f0b70faa552e6c44d8cd532,Namespace:kube-system,Attemp
t:1,},State:SANDBOX_READY,CreatedAt:1723056054208085222,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b12d62604f0b70faa552e6c44d8cd532,kubernetes.io/config.seen: 2024-08-07T18:28:06.683419616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6w6g,Uid:143456ef-ffd1-4d42-b9d0-6b778094eca5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054207726828,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,k8s-app: kube-dns,pod-
template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:38.545009035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&PodSandboxMetadata{Name:etcd-ha-198246,Uid:c60b0b92792ae1d5ba11a7a2e649f612,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054179306788,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.196:2379,kubernetes.io/config.hash: c60b0b92792ae1d5ba11a7a2e649f612,kubernetes.io/config.seen: 2024-08-07T18:28:06.683422991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&Po
dSandboxMetadata{Name:kube-apiserver-ha-198246,Uid:b2b91906fc54e8232161e687fc4a9af5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054157001277,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.196:8443,kubernetes.io/config.hash: b2b91906fc54e8232161e687fc4a9af5,kubernetes.io/config.seen: 2024-08-07T18:28:06.683416297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-198246,Uid:56b90546fb511b52cb0b98695e572bae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723056054128638175,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 56b90546fb511b52cb0b98695e572bae,kubernetes.io/config.seen: 2024-08-07T18:28:06.683420907Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-chh26,Uid:42848aea-5e18-4f5c-b59d-f615d5128a74,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055516206059454,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:31:55.870264799Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6w6g,Uid:143456ef-ffd1-4d42-b9d0-6b778094eca5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055319156041870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:38.545009035Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbnrx,Uid:96fa387b-f93b-40df-9ed6-78834f3d02df,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055318852787005,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:38.542187868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&PodSandboxMetadata{Name:kube-proxy-4l79v,Uid:649e12b4-4e77-48a9-af9c-691694c4ec99,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055302263005104,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:20.456193111Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&PodSandboxMetadata{Name:kindnet-sgl8v,Uid:574aa453-48ef-44ff-b10a-13142fc8cf7f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055301680579966,Labels:map[string]string{app: kindnet,controller-revision-hash: 7c6d997646,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-07T18:28:20.468552308Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-198246,Uid:56b90546fb511b52cb0b98695e572bae,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055280305726993,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 56b90546fb511b52cb0b98695e572bae,kubernetes.io/config.seen: 2024-08-07T18:27:59.844707444Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&PodSandboxMetadata{Name:etcd-ha-198246,Uid:c60b0b92792ae1d5ba11a7a2e649f612,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723055280302410119,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.196:2379,kubernetes.io/config.hash: c60b0b92
792ae1d5ba11a7a2e649f612,kubernetes.io/config.seen: 2024-08-07T18:27:59.844709601Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1d852308-b4fb-4522-a48a-61d0bdffd7fd name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.088115605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66082192-7017-4fa2-9b2c-0a57b048dbcc name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.088205376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66082192-7017-4fa2-9b2c-0a57b048dbcc name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.088887503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66082192-7017-4fa2-9b2c-0a57b048dbcc name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.117844951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c508114-bc0f-460f-a55b-336875899a7e name=/runtime.v1.RuntimeService/Version
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.117923769Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c508114-bc0f-460f-a55b-336875899a7e name=/runtime.v1.RuntimeService/Version
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.119590186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb0be3a7-032b-480b-93cf-bf5532ac4598 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.120082827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056192120056878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb0be3a7-032b-480b-93cf-bf5532ac4598 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.120886161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1554560e-5c0d-4fc2-ab9f-eff455de05c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.120962561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1554560e-5c0d-4fc2-ab9f-eff455de05c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.121502570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1554560e-5c0d-4fc2-ab9f-eff455de05c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.179290305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b4e13ef-e031-473b-9219-d2529dafe9fa name=/runtime.v1.RuntimeService/Version
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.179427078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b4e13ef-e031-473b-9219-d2529dafe9fa name=/runtime.v1.RuntimeService/Version
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.180946669Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1845d696-523e-459f-8454-f65216bf34fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.181391543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056192181370908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1845d696-523e-459f-8454-f65216bf34fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.182042631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6433a11-2ef3-4110-8a17-d20d2ba43e84 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.182114146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6433a11-2ef3-4110-8a17-d20d2ba43e84 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:43:12 ha-198246 crio[3742]: time="2024-08-07 18:43:12.182608244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6433a11-2ef3-4110-8a17-d20d2ba43e84 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	009d486f82ea0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      58 seconds ago       Running             storage-provisioner       4                   6fc362f9e3c6e       storage-provisioner
	c98757fe8dd8c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   60563652ff3ff       kube-apiserver-ha-198246
	52694c1332778       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   384a81ba0d97c       kube-controller-manager-ha-198246
	ac6cd08615618       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   5598e77b3f2c9       busybox-fc5497c4f-chh26
	0336639d7a74d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   6fc362f9e3c6e       storage-provisioner
	9f540fc3d24fc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   d4e80fa25c9af       kube-vip-ha-198246
	cf1befd19e1e6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a1d7d3fd1da98       coredns-7db6d8ff4d-rbnrx
	d7cbe0ad607e5       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      2 minutes ago        Running             kindnet-cni               1                   5ac81bf00a7a3       kindnet-sgl8v
	f03c4cb552619       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a833ec31c33bb       coredns-7db6d8ff4d-w6w6g
	a9e99c8b34ca1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   384a81ba0d97c       kube-controller-manager-ha-198246
	1ceccc741c65b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   b016288ef1123       kube-proxy-4l79v
	c570124d66270       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   45b19adfcff01       kube-scheduler-ha-198246
	3b11723f44266       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   2667de827b560       etcd-ha-198246
	bef4b4746f9f5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   60563652ff3ff       kube-apiserver-ha-198246
	80335e9819afd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   4d0990efdcee8       busybox-fc5497c4f-chh26
	806c3ba54cd9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   a5394b2f1434b       coredns-7db6d8ff4d-w6w6g
	3f9784c457acb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   b57adade6ea15       coredns-7db6d8ff4d-rbnrx
	5433090bdddca       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    14 minutes ago       Exited              kindnet-cni               0                   bd5d340b4a584       kindnet-sgl8v
	c6c6220e1a7fb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   4aec116af531d       kube-proxy-4l79v
	2ff4075c05c48       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      15 minutes ago       Exited              kube-scheduler            0                   7c56ff7ba09a0       kube-scheduler-ha-198246
	981dfd0662596       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   0e8285057cc05       etcd-ha-198246
	
	
	==> coredns [3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7] <==
	[INFO] 10.244.0.4:41062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090569s
	[INFO] 10.244.0.4:45221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159605s
	[INFO] 10.244.0.4:52919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008416s
	[INFO] 10.244.2.2:57336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947478s
	[INFO] 10.244.2.2:58778 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148421s
	[INFO] 10.244.2.2:40534 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094901s
	[INFO] 10.244.2.2:34562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435891s
	[INFO] 10.244.2.2:40255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066647s
	[INFO] 10.244.2.2:33303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074642s
	[INFO] 10.244.2.2:54865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065816s
	[INFO] 10.244.1.2:56362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135028s
	[INFO] 10.244.1.2:50486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103508s
	[INFO] 10.244.0.4:60915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079398s
	[INFO] 10.244.2.2:36331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189607s
	[INFO] 10.244.1.2:44020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226665s
	[INFO] 10.244.1.2:47459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129465s
	[INFO] 10.244.0.4:59992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000059798s
	[INFO] 10.244.0.4:55811 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139124s
	[INFO] 10.244.2.2:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132316s
	[INFO] 10.244.2.2:34338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147334s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab] <==
	[INFO] 10.244.1.2:39185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274854s
	[INFO] 10.244.1.2:32995 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301562s
	[INFO] 10.244.1.2:57764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00324711s
	[INFO] 10.244.0.4:43175 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001969935s
	[INFO] 10.244.0.4:47947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090373s
	[INFO] 10.244.2.2:59435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185691s
	[INFO] 10.244.1.2:41342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215074s
	[INFO] 10.244.1.2:58323 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133762s
	[INFO] 10.244.0.4:48395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131554s
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121525s
	[INFO] 10.244.0.4:53506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084053s
	[INFO] 10.244.2.2:47826 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205944s
	[INFO] 10.244.2.2:43418 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113361s
	[INFO] 10.244.2.2:53197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103281s
	[INFO] 10.244.1.2:51874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001263s
	[INFO] 10.244.1.2:40094 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205313s
	[INFO] 10.244.0.4:55591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001033s
	[INFO] 10.244.0.4:41281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083191s
	[INFO] 10.244.2.2:52214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093799s
	[INFO] 10.244.2.2:55578 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146065s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49806->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-198246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:28:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-198246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e31604902e0745d1a1407795d2ccbfc0
	  System UUID:                e3160490-2e07-45d1-a140-7795d2ccbfc0
	  Boot ID:                    9b0f1850-84af-432c-85c0-f24cda670347
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-chh26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-rbnrx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-w6w6g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-198246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-sgl8v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-198246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-198246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4l79v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-198246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-198246                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 96s   kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node ha-198246 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node ha-198246 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m   kubelet          Node ha-198246 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   NodeReady                14m   kubelet          Node ha-198246 status is now: NodeReady
	  Normal   RegisteredNode           12m   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Warning  ContainerGCFailed        3m6s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           87s   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           81s   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           30s   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	
	
	Name:               ha-198246-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:30:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:43:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:42:12 +0000   Wed, 07 Aug 2024 18:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:42:12 +0000   Wed, 07 Aug 2024 18:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:42:12 +0000   Wed, 07 Aug 2024 18:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:42:12 +0000   Wed, 07 Aug 2024 18:41:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-198246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8eadf45fa3a45c1ace8b37287f97c9d
	  System UUID:                b8eadf45-fa3a-45c1-ace8-b37287f97c9d
	  Boot ID:                    20778be6-5f4b-49db-b89c-1662c1afc9ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g62d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-198246-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-8x6fj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-198246-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-198246-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-m5ng2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-198246-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-198246-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)    kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  NodeNotReady             8m58s                node-controller  Node ha-198246-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m1s (x8 over 2m1s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s (x8 over 2m1s)  kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           81s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           30s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	
	
	Name:               ha-198246-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_31_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:31:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:42:48 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:42:48 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:42:48 +0000   Wed, 07 Aug 2024 18:31:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:42:48 +0000   Wed, 07 Aug 2024 18:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-198246-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60409ac81f5346078f5f2d7599678540
	  System UUID:                60409ac8-1f53-4607-8f5f-2d7599678540
	  Boot ID:                    f7dce993-4040-4a27-b5f7-0055771c46aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k2t25                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-198246-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-7854s                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-198246-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-198246-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7mttr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-198246-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-198246-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-198246-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-198246-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  55s                kubelet          Node ha-198246-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s                kubelet          Node ha-198246-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s                kubelet          Node ha-198246-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 55s                kubelet          Node ha-198246-m03 has been rebooted, boot id: f7dce993-4040-4a27-b5f7-0055771c46aa
	  Normal   RegisteredNode           30s                node-controller  Node ha-198246-m03 event: Registered Node ha-198246-m03 in Controller
	
	
	Name:               ha-198246-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:32:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:43:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:43:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:43:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:43:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-198246-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e050b6016e8b45679acbdd2b5c7bde62
	  System UUID:                e050b601-6e8b-4567-9acb-dd2b5c7bde62
	  Boot ID:                    5d8bf446-d965-45d0-b8f9-22abbef3d3d9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5vj44       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5ggpl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   NodeReady                9m51s              kubelet          Node ha-198246-m04 status is now: NodeReady
	  Normal   RegisteredNode           87s                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   NodeNotReady             47s                node-controller  Node ha-198246-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-198246-m04 has been rebooted, boot id: 5d8bf446-d965-45d0-b8f9-22abbef3d3d9
	  Normal   NodeReady                8s                 kubelet          Node ha-198246-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.057949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071605] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183672] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.110780] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.300871] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.248154] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.501138] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062750] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 7 18:28] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.095778] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.277376] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.193932] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:30] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 7 18:40] systemd-fstab-generator[3660]: Ignoring "noauto" option for root device
	[  +0.164157] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.182599] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +0.155401] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.298938] systemd-fstab-generator[3726]: Ignoring "noauto" option for root device
	[  +4.468694] systemd-fstab-generator[3831]: Ignoring "noauto" option for root device
	[  +0.093451] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.661157] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:41] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.168055] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.835526] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.736776] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548] <==
	{"level":"warn","ts":"2024-08-07T18:42:11.863253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:42:11.952144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a14f9258d3b66c75","from":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-07T18:42:12.018838Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:12.01902Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:15.670808Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8d69f1f11485af9","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:15.671076Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8d69f1f11485af9","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:16.021021Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:16.021061Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:20.023142Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.227:2380/version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:20.023276Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"8d69f1f11485af9","error":"Get \"https://192.168.39.227:2380/version\": dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:20.671523Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"8d69f1f11485af9","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-07T18:42:20.671569Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"8d69f1f11485af9","rtt":"0s","error":"dial tcp 192.168.39.227:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-07T18:42:22.408559Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:22.409261Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:22.409981Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:22.44034Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"8d69f1f11485af9","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-07T18:42:22.440406Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:22.451127Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a14f9258d3b66c75","to":"8d69f1f11485af9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-07T18:42:22.451171Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:26.935326Z","caller":"traceutil/trace.go:171","msg":"trace[15955504] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2553; }","duration":"165.270774ms","start":"2024-08-07T18:42:26.770038Z","end":"2024-08-07T18:42:26.935308Z","steps":["trace[15955504] 'process raft request'  (duration: 165.239129ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.936767Z","caller":"traceutil/trace.go:171","msg":"trace[1652254745] linearizableReadLoop","detail":"{readStateIndex:2991; appliedIndex:2994; }","duration":"157.339424ms","start":"2024-08-07T18:42:26.779407Z","end":"2024-08-07T18:42:26.936746Z","steps":["trace[1652254745] 'read index received'  (duration: 157.335572ms)","trace[1652254745] 'applied index is now lower than readState.Index'  (duration: 2.775µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:42:26.938531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.991838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-w6w6g\" ","response":"range_response_count:1 size:5088"}
	{"level":"info","ts":"2024-08-07T18:42:26.939358Z","caller":"traceutil/trace.go:171","msg":"trace[561683362] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-w6w6g; range_end:; response_count:1; response_revision:2553; }","duration":"159.98222ms","start":"2024-08-07T18:42:26.779355Z","end":"2024-08-07T18:42:26.939337Z","steps":["trace[561683362] 'agreement among raft nodes before linearized reading'  (duration: 158.92817ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.939717Z","caller":"traceutil/trace.go:171","msg":"trace[254684823] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2554; }","duration":"169.480685ms","start":"2024-08-07T18:42:26.77022Z","end":"2024-08-07T18:42:26.939701Z","steps":["trace[254684823] 'process raft request'  (duration: 168.580094ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.938914Z","caller":"traceutil/trace.go:171","msg":"trace[1987519294] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2554; }","duration":"165.708781ms","start":"2024-08-07T18:42:26.773192Z","end":"2024-08-07T18:42:26.938901Z","steps":["trace[1987519294] 'process raft request'  (duration: 165.635719ms)"],"step_count":1}
	
	
	==> etcd [981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5] <==
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-07T18:39:11.930241Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7815312355546630082,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-07T18:39:11.969251Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a14f9258d3b66c75","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-07T18:39:11.969664Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969743Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969917Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.970108Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.97016Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.970184Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970212Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970256Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970361Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970416Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970528Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970544Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.973405Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-07T18:39:11.973569Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-07T18:39:11.973595Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-198246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"]}
	
	
	==> kernel <==
	 18:43:12 up 15 min,  0 users,  load average: 0.46, 0.52, 0.35
	Linux ha-198246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554] <==
	I0807 18:38:48.091648       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:38:48.091771       1 main.go:299] handling current node
	I0807 18:38:48.091821       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:38:48.091848       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:38:48.092071       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:38:48.092105       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:38:48.092190       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:38:48.092216       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:38:58.091080       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:38:58.091321       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:38:58.091589       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:38:58.091621       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:38:58.091694       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:38:58.091714       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:38:58.091785       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:38:58.091804       1 main.go:299] handling current node
	I0807 18:39:08.099724       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:39:08.099917       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:39:08.100125       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:39:08.100153       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:39:08.100270       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:39:08.100291       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:39:08.100346       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:39:08.100364       1 main.go:299] handling current node
	E0807 18:39:09.959670       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453] <==
	I0807 18:42:36.014930       1 main.go:299] handling current node
	I0807 18:42:46.011867       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:42:46.012018       1 main.go:299] handling current node
	I0807 18:42:46.012070       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:42:46.012100       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:42:46.012305       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:42:46.012367       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:42:46.012624       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:42:46.012700       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:42:56.010515       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:42:56.010647       1 main.go:299] handling current node
	I0807 18:42:56.010675       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:42:56.010692       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:42:56.010879       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:42:56.010905       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:42:56.010980       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:42:56.011000       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:43:06.010583       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:43:06.010806       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:43:06.011308       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:43:06.011382       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:43:06.011617       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:43:06.011663       1 main.go:299] handling current node
	I0807 18:43:06.011714       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:43:06.011737       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d] <==
	I0807 18:40:55.220414       1 options.go:221] external host was not specified, using 192.168.39.196
	I0807 18:40:55.221402       1 server.go:148] Version: v1.30.3
	I0807 18:40:55.221544       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:40:55.885422       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0807 18:40:55.908598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 18:40:55.919327       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0807 18:40:55.919418       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0807 18:40:55.919747       1 instance.go:299] Using reconciler: lease
	W0807 18:41:15.884207       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0807 18:41:15.884360       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0807 18:41:15.920844       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0807 18:41:15.920878       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2] <==
	I0807 18:41:39.709779       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0807 18:41:39.710215       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 18:41:39.710425       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 18:41:39.778749       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 18:41:39.787151       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 18:41:39.787192       1 policy_source.go:224] refreshing policies
	I0807 18:41:39.800020       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 18:41:39.803027       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 18:41:39.805345       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 18:41:39.805411       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 18:41:39.806972       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 18:41:39.821935       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 18:41:39.825287       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 18:41:39.825736       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 18:41:39.826683       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 18:41:39.826942       1 aggregator.go:165] initial CRD sync complete...
	I0807 18:41:39.827026       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 18:41:39.827053       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 18:41:39.827076       1 cache.go:39] Caches are synced for autoregister controller
	W0807 18:41:39.970663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.251]
	I0807 18:41:39.971940       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 18:41:39.977766       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0807 18:41:39.983275       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0807 18:41:40.709242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0807 18:41:41.000926       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196 192.168.39.251]
	
	
	==> kube-controller-manager [52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b] <==
	I0807 18:41:51.948122       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246-m04"
	I0807 18:41:51.948160       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-198246"
	I0807 18:41:51.948405       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0807 18:41:51.953294       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 18:41:51.966013       1 shared_informer.go:320] Caches are synced for cronjob
	I0807 18:41:51.966729       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 18:41:52.417876       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 18:41:52.459286       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 18:41:52.459372       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0807 18:41:52.821573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="168.117µs"
	I0807 18:41:58.111754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.022268ms"
	I0807 18:41:58.112050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="149.081µs"
	I0807 18:42:18.448768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.122117ms"
	I0807 18:42:18.448963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.883µs"
	I0807 18:42:18.924345       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqv4l EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqv4l\": the object has been modified; please apply your changes to the latest version and try again"
	I0807 18:42:18.924734       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e943e6d5-492a-4b17-b13e-1f19556376b7", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqv4l EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqv4l": the object has been modified; please apply your changes to the latest version and try again
	I0807 18:42:18.927260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.887666ms"
	I0807 18:42:18.927422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.49µs"
	I0807 18:42:26.946513       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tqv4l EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tqv4l\": the object has been modified; please apply your changes to the latest version and try again"
	I0807 18:42:26.948544       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e943e6d5-492a-4b17-b13e-1f19556376b7", APIVersion:"v1", ResourceVersion:"250", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tqv4l EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tqv4l": the object has been modified; please apply your changes to the latest version and try again
	I0807 18:42:27.058258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="292.696658ms"
	I0807 18:42:27.058382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.35µs"
	I0807 18:42:37.683385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.98383ms"
	I0807 18:42:37.683649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.116µs"
	I0807 18:43:04.172966       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	
	
	==> kube-controller-manager [a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5] <==
	I0807 18:40:56.133957       1 serving.go:380] Generated self-signed cert in-memory
	I0807 18:40:56.419739       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0807 18:40:56.419779       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:40:56.421777       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 18:40:56.421919       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 18:40:56.422476       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 18:40:56.422378       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0807 18:41:16.927071       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.196:8443/healthz\": dial tcp 192.168.39.196:8443: connect: connection refused"
	
	
	==> kube-proxy [1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8] <==
	I0807 18:41:36.575857       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:41:36.576181       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:41:36.576221       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:41:36.578389       1 config.go:192] "Starting service config controller"
	I0807 18:41:36.578436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:41:36.578551       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:41:36.578571       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:41:36.579294       1 config.go:319] "Starting node config controller"
	I0807 18:41:36.579336       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0807 18:41:39.616952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:39.617301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:39.617478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617590       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0807 18:41:39.617973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0807 18:41:45.279637       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:41:45.478687       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:41:45.679416       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8] <==
	E0807 18:38:07.606371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.678939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.679531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.679930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.823926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.824334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.824619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:26.039315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:26.040083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:26.040368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:26.040530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:29.110847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:29.111109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:44.471365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:44.471507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:44.471728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:44.471767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:50.615780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:50.615956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56] <==
	W0807 18:39:04.616635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:04.616746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:04.720177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:39:04.720265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:39:04.899572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:04.899659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:05.052221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:05.052345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:39:05.344248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:05.344378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:05.409802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 18:39:05.409852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 18:39:05.476009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:39:05.476053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:39:05.481275       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:39:05.481369       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:39:05.873604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:39:05.873714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:39:05.888981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 18:39:05.889098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0807 18:39:10.670361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 18:39:10.670540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 18:39:11.563080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:11.563140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:11.664228       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c] <==
	W0807 18:41:33.375051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:33.375227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.110708       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.196:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.110851       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.196:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.775939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.196:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.776016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.196:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.879403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.879562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:35.625373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:35.625570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:35.867261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.196:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:35.867392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.196:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.111878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.112054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.209372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.196:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.209435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.196:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.218105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.218162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.456926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.456994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.899602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.899685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:37.281783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:37.281838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	I0807 18:41:58.933040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 18:41:39 ha-198246 kubelet[1372]: E0807 18:41:39.573909    1372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-198246?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 07 18:41:39 ha-198246 kubelet[1372]: I0807 18:41:39.573983    1372 status_manager.go:853] "Failed to get status for pod" podUID="c60b0b92792ae1d5ba11a7a2e649f612" pod="kube-system/etcd-ha-198246" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-198246\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 07 18:41:42 ha-198246 kubelet[1372]: E0807 18:41:42.645989    1372 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-198246\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-198246?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 07 18:41:42 ha-198246 kubelet[1372]: W0807 18:41:42.645991    1372 reflector.go:547] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-198246&resourceVersion=2037": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 07 18:41:42 ha-198246 kubelet[1372]: E0807 18:41:42.646391    1372 reflector.go:150] pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)ha-198246&resourceVersion=2037": dial tcp 192.168.39.254:8443: connect: no route to host
	Aug 07 18:41:42 ha-198246 kubelet[1372]: I0807 18:41:42.646525    1372 status_manager.go:853] "Failed to get status for pod" podUID="b12d62604f0b70faa552e6c44d8cd532" pod="kube-system/kube-controller-manager-ha-198246" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-198246\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 07 18:41:47 ha-198246 kubelet[1372]: I0807 18:41:47.739520    1372 scope.go:117] "RemoveContainer" containerID="0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a"
	Aug 07 18:41:47 ha-198246 kubelet[1372]: E0807 18:41:47.739705    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(88457253-9aa8-4bd7-974f-1b47b341d40c)\"" pod="kube-system/storage-provisioner" podUID="88457253-9aa8-4bd7-974f-1b47b341d40c"
	Aug 07 18:42:02 ha-198246 kubelet[1372]: I0807 18:42:02.739861    1372 scope.go:117] "RemoveContainer" containerID="0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a"
	Aug 07 18:42:02 ha-198246 kubelet[1372]: E0807 18:42:02.740532    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(88457253-9aa8-4bd7-974f-1b47b341d40c)\"" pod="kube-system/storage-provisioner" podUID="88457253-9aa8-4bd7-974f-1b47b341d40c"
	Aug 07 18:42:06 ha-198246 kubelet[1372]: E0807 18:42:06.763636    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:42:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:42:13 ha-198246 kubelet[1372]: I0807 18:42:13.739005    1372 scope.go:117] "RemoveContainer" containerID="0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a"
	Aug 07 18:42:14 ha-198246 kubelet[1372]: I0807 18:42:14.981581    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-chh26" podStartSLOduration=617.287881698 podStartE2EDuration="10m19.981537865s" podCreationTimestamp="2024-08-07 18:31:55 +0000 UTC" firstStartedPulling="2024-08-07 18:31:56.392612234 +0000 UTC m=+229.818597257" lastFinishedPulling="2024-08-07 18:31:59.086268404 +0000 UTC m=+232.512253424" observedRunningTime="2024-08-07 18:31:59.764202578 +0000 UTC m=+233.190187619" watchObservedRunningTime="2024-08-07 18:42:14.981537865 +0000 UTC m=+848.407522911"
	Aug 07 18:42:31 ha-198246 kubelet[1372]: I0807 18:42:31.739129    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-198246" podUID="a230b27d-cbec-4a1a-a7e7-7192f3de3915"
	Aug 07 18:42:31 ha-198246 kubelet[1372]: I0807 18:42:31.761487    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-198246"
	Aug 07 18:42:36 ha-198246 kubelet[1372]: I0807 18:42:36.763710    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-198246" podStartSLOduration=5.763681356 podStartE2EDuration="5.763681356s" podCreationTimestamp="2024-08-07 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-07 18:42:36.760068714 +0000 UTC m=+870.186053754" watchObservedRunningTime="2024-08-07 18:42:36.763681356 +0000 UTC m=+870.189666414"
	Aug 07 18:43:06 ha-198246 kubelet[1372]: E0807 18:43:06.761905    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:43:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 18:43:11.636274   52271 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198246 -n ha-198246
helpers_test.go:261: (dbg) Run:  kubectl --context ha-198246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.15s)
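
The "bufio.Scanner: token too long" message in the stderr block above is the stock error a Go bufio.Scanner returns when a single line exceeds its buffer cap (64 KiB by default), which is why the post-mortem step could not re-read lastStart.txt. The following is only a minimal, self-contained sketch of that failure mode and the usual fix (raising the cap with Scanner.Buffer); the file name is a stand-in and this is not minikube's actual logs code:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// "lastStart.txt" is only a stand-in for the log file mentioned above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a Scanner refuses tokens larger than bufio.MaxScanTokenSize
	// (64 KiB); a longer line stops the scan and Err() reports
	// bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Giving the scanner a larger ceiling avoids that:
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for sc.Scan() {
		_ = sc.Text() // handle one log line at a time
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
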

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 stop -v=7 --alsologtostderr: exit status 82 (2m0.470380934s)

                                                
                                                
-- stdout --
	* Stopping node "ha-198246-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:43:31.424587   52683 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:43:31.424867   52683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:43:31.424876   52683 out.go:304] Setting ErrFile to fd 2...
	I0807 18:43:31.424880   52683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:43:31.425082   52683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:43:31.425288   52683 out.go:298] Setting JSON to false
	I0807 18:43:31.425357   52683 mustload.go:65] Loading cluster: ha-198246
	I0807 18:43:31.425703   52683 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:43:31.425782   52683 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
	I0807 18:43:31.425952   52683 mustload.go:65] Loading cluster: ha-198246
	I0807 18:43:31.426077   52683 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:43:31.426104   52683 stop.go:39] StopHost: ha-198246-m04
	I0807 18:43:31.426434   52683 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:43:31.426483   52683 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:43:31.440723   52683 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0807 18:43:31.441193   52683 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:43:31.441711   52683 main.go:141] libmachine: Using API Version  1
	I0807 18:43:31.441735   52683 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:43:31.442094   52683 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:43:31.444286   52683 out.go:177] * Stopping node "ha-198246-m04"  ...
	I0807 18:43:31.445507   52683 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0807 18:43:31.445532   52683 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:43:31.445761   52683 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0807 18:43:31.445787   52683 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:43:31.448747   52683 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:43:31.449139   52683 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:42:59 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:43:31.449177   52683 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:43:31.449287   52683 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:43:31.449453   52683 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:43:31.449633   52683 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:43:31.449797   52683 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	I0807 18:43:31.534730   52683 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0807 18:43:31.587550   52683 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0807 18:43:31.640033   52683 main.go:141] libmachine: Stopping "ha-198246-m04"...
	I0807 18:43:31.640082   52683 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:43:31.641677   52683 main.go:141] libmachine: (ha-198246-m04) Calling .Stop
	I0807 18:43:31.645395   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 0/120
	I0807 18:43:32.646659   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 1/120
	I0807 18:43:33.647984   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 2/120
	I0807 18:43:34.649685   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 3/120
	I0807 18:43:35.651110   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 4/120
	I0807 18:43:36.653353   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 5/120
	I0807 18:43:37.654836   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 6/120
	I0807 18:43:38.656346   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 7/120
	I0807 18:43:39.658909   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 8/120
	I0807 18:43:40.660518   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 9/120
	I0807 18:43:41.662755   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 10/120
	I0807 18:43:42.664440   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 11/120
	I0807 18:43:43.665833   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 12/120
	I0807 18:43:44.668022   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 13/120
	I0807 18:43:45.669471   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 14/120
	I0807 18:43:46.671320   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 15/120
	I0807 18:43:47.672773   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 16/120
	I0807 18:43:48.674086   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 17/120
	I0807 18:43:49.675434   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 18/120
	I0807 18:43:50.677164   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 19/120
	I0807 18:43:51.678646   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 20/120
	I0807 18:43:52.680268   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 21/120
	I0807 18:43:53.682070   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 22/120
	I0807 18:43:54.683641   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 23/120
	I0807 18:43:55.685758   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 24/120
	I0807 18:43:56.687664   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 25/120
	I0807 18:43:57.689369   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 26/120
	I0807 18:43:58.690792   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 27/120
	I0807 18:43:59.692110   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 28/120
	I0807 18:44:00.693442   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 29/120
	I0807 18:44:01.695412   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 30/120
	I0807 18:44:02.696911   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 31/120
	I0807 18:44:03.698959   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 32/120
	I0807 18:44:04.700315   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 33/120
	I0807 18:44:05.701850   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 34/120
	I0807 18:44:06.703485   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 35/120
	I0807 18:44:07.704849   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 36/120
	I0807 18:44:08.706679   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 37/120
	I0807 18:44:09.707989   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 38/120
	I0807 18:44:10.709369   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 39/120
	I0807 18:44:11.711502   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 40/120
	I0807 18:44:12.712880   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 41/120
	I0807 18:44:13.714651   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 42/120
	I0807 18:44:14.716276   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 43/120
	I0807 18:44:15.718416   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 44/120
	I0807 18:44:16.720538   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 45/120
	I0807 18:44:17.722644   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 46/120
	I0807 18:44:18.724108   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 47/120
	I0807 18:44:19.725760   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 48/120
	I0807 18:44:20.727117   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 49/120
	I0807 18:44:21.729171   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 50/120
	I0807 18:44:22.730478   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 51/120
	I0807 18:44:23.731726   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 52/120
	I0807 18:44:24.733542   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 53/120
	I0807 18:44:25.734968   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 54/120
	I0807 18:44:26.736349   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 55/120
	I0807 18:44:27.737626   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 56/120
	I0807 18:44:28.739739   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 57/120
	I0807 18:44:29.741340   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 58/120
	I0807 18:44:30.742743   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 59/120
	I0807 18:44:31.744598   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 60/120
	I0807 18:44:32.746716   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 61/120
	I0807 18:44:33.749033   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 62/120
	I0807 18:44:34.750231   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 63/120
	I0807 18:44:35.752013   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 64/120
	I0807 18:44:36.753452   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 65/120
	I0807 18:44:37.754785   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 66/120
	I0807 18:44:38.756275   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 67/120
	I0807 18:44:39.757692   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 68/120
	I0807 18:44:40.759111   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 69/120
	I0807 18:44:41.760544   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 70/120
	I0807 18:44:42.762106   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 71/120
	I0807 18:44:43.763763   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 72/120
	I0807 18:44:44.765187   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 73/120
	I0807 18:44:45.767168   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 74/120
	I0807 18:44:46.768699   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 75/120
	I0807 18:44:47.770932   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 76/120
	I0807 18:44:48.772375   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 77/120
	I0807 18:44:49.774947   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 78/120
	I0807 18:44:50.776696   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 79/120
	I0807 18:44:51.778749   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 80/120
	I0807 18:44:52.780792   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 81/120
	I0807 18:44:53.782627   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 82/120
	I0807 18:44:54.783967   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 83/120
	I0807 18:44:55.785121   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 84/120
	I0807 18:44:56.786660   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 85/120
	I0807 18:44:57.788664   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 86/120
	I0807 18:44:58.790779   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 87/120
	I0807 18:44:59.792091   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 88/120
	I0807 18:45:00.793581   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 89/120
	I0807 18:45:01.796080   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 90/120
	I0807 18:45:02.797381   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 91/120
	I0807 18:45:03.798823   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 92/120
	I0807 18:45:04.800190   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 93/120
	I0807 18:45:05.801533   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 94/120
	I0807 18:45:06.803896   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 95/120
	I0807 18:45:07.805677   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 96/120
	I0807 18:45:08.806877   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 97/120
	I0807 18:45:09.808278   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 98/120
	I0807 18:45:10.809554   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 99/120
	I0807 18:45:11.811654   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 100/120
	I0807 18:45:12.813221   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 101/120
	I0807 18:45:13.815243   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 102/120
	I0807 18:45:14.816703   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 103/120
	I0807 18:45:15.818131   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 104/120
	I0807 18:45:16.820108   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 105/120
	I0807 18:45:17.821511   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 106/120
	I0807 18:45:18.822906   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 107/120
	I0807 18:45:19.824248   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 108/120
	I0807 18:45:20.826211   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 109/120
	I0807 18:45:21.828112   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 110/120
	I0807 18:45:22.829951   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 111/120
	I0807 18:45:23.831503   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 112/120
	I0807 18:45:24.832857   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 113/120
	I0807 18:45:25.834478   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 114/120
	I0807 18:45:26.836191   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 115/120
	I0807 18:45:27.838271   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 116/120
	I0807 18:45:28.839618   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 117/120
	I0807 18:45:29.841079   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 118/120
	I0807 18:45:30.842734   52683 main.go:141] libmachine: (ha-198246-m04) Waiting for machine to stop 119/120
	I0807 18:45:31.844000   52683 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0807 18:45:31.844058   52683 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0807 18:45:31.845928   52683 out.go:177] 
	W0807 18:45:31.847220   52683 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0807 18:45:31.847239   52683 out.go:239] * 
	* 
	W0807 18:45:31.849572   52683 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 18:45:31.851855   52683 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-198246 stop -v=7 --alsologtostderr": exit status 82
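
Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the driver polled the VM once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and the guest never left the "Running" state. The following is only a rough sketch of that polling pattern, not minikube's or libmachine's actual code; the fakeVM type and the 3-attempt limit are purely illustrative:

package main

import (
	"errors"
	"fmt"
	"time"
)

type vm interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running" or "Stopped"
}

// fakeVM stands in for a driver whose guest never shuts down,
// reproducing the timeout case seen in the log above.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func waitForStop(m vm, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if st, err := m.State(); err == nil && st != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 3 attempts instead of 120 just to keep the demo short.
	if err := waitForStop(fakeVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}
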
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr: exit status 3 (18.938996004s)

                                                
                                                
-- stdout --
	ha-198246
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-198246-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:45:31.896903   53090 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:45:31.897159   53090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:45:31.897168   53090 out.go:304] Setting ErrFile to fd 2...
	I0807 18:45:31.897172   53090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:45:31.897347   53090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:45:31.897506   53090 out.go:298] Setting JSON to false
	I0807 18:45:31.897529   53090 mustload.go:65] Loading cluster: ha-198246
	I0807 18:45:31.897569   53090 notify.go:220] Checking for updates...
	I0807 18:45:31.898047   53090 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:45:31.898068   53090 status.go:255] checking status of ha-198246 ...
	I0807 18:45:31.898494   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:31.898553   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:31.918455   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34297
	I0807 18:45:31.918864   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:31.919506   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:31.919526   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:31.919923   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:31.920150   53090 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:45:31.921764   53090 status.go:330] ha-198246 host status = "Running" (err=<nil>)
	I0807 18:45:31.921782   53090 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:45:31.922075   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:31.922114   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:31.936780   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0807 18:45:31.937231   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:31.937732   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:31.937757   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:31.938111   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:31.938310   53090 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:45:31.941650   53090 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:45:31.942086   53090 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:45:31.942122   53090 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:45:31.942278   53090 host.go:66] Checking if "ha-198246" exists ...
	I0807 18:45:31.942601   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:31.942649   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:31.957430   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34799
	I0807 18:45:31.957917   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:31.958390   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:31.958412   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:31.958737   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:31.958952   53090 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:45:31.959133   53090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:45:31.959166   53090 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:45:31.962011   53090 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:45:31.962554   53090 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:45:31.962593   53090 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:45:31.962674   53090 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:45:31.962832   53090 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:45:31.962981   53090 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:45:31.963120   53090 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:45:32.052429   53090 ssh_runner.go:195] Run: systemctl --version
	I0807 18:45:32.060544   53090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:45:32.078646   53090 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:45:32.078672   53090 api_server.go:166] Checking apiserver status ...
	I0807 18:45:32.078729   53090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:45:32.098928   53090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5117/cgroup
	W0807 18:45:32.112486   53090 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5117/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:45:32.112535   53090 ssh_runner.go:195] Run: ls
	I0807 18:45:32.117568   53090 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:45:32.122105   53090 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:45:32.122125   53090 status.go:422] ha-198246 apiserver status = Running (err=<nil>)
	I0807 18:45:32.122134   53090 status.go:257] ha-198246 status: &{Name:ha-198246 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:45:32.122169   53090 status.go:255] checking status of ha-198246-m02 ...
	I0807 18:45:32.122521   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.122555   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.137345   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0807 18:45:32.137931   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.138476   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.138499   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.138794   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.138950   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetState
	I0807 18:45:32.140557   53090 status.go:330] ha-198246-m02 host status = "Running" (err=<nil>)
	I0807 18:45:32.140573   53090 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:45:32.140836   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.140879   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.156708   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40079
	I0807 18:45:32.157111   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.157524   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.157546   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.157871   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.158065   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetIP
	I0807 18:45:32.160768   53090 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:45:32.161205   53090 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:41:00 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:45:32.161232   53090 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:45:32.161359   53090 host.go:66] Checking if "ha-198246-m02" exists ...
	I0807 18:45:32.161757   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.161805   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.176018   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41121
	I0807 18:45:32.176448   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.176935   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.176954   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.177346   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.177549   53090 main.go:141] libmachine: (ha-198246-m02) Calling .DriverName
	I0807 18:45:32.177725   53090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:45:32.177746   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHHostname
	I0807 18:45:32.180762   53090 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:45:32.181232   53090 main.go:141] libmachine: (ha-198246-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:91:fc", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:41:00 +0000 UTC Type:0 Mac:52:54:00:c8:91:fc Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-198246-m02 Clientid:01:52:54:00:c8:91:fc}
	I0807 18:45:32.181258   53090 main.go:141] libmachine: (ha-198246-m02) DBG | domain ha-198246-m02 has defined IP address 192.168.39.251 and MAC address 52:54:00:c8:91:fc in network mk-ha-198246
	I0807 18:45:32.181423   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHPort
	I0807 18:45:32.181615   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHKeyPath
	I0807 18:45:32.181805   53090 main.go:141] libmachine: (ha-198246-m02) Calling .GetSSHUsername
	I0807 18:45:32.181947   53090 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m02/id_rsa Username:docker}
	I0807 18:45:32.269514   53090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 18:45:32.286699   53090 kubeconfig.go:125] found "ha-198246" server: "https://192.168.39.254:8443"
	I0807 18:45:32.286721   53090 api_server.go:166] Checking apiserver status ...
	I0807 18:45:32.286754   53090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 18:45:32.301652   53090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1642/cgroup
	W0807 18:45:32.311128   53090 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1642/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 18:45:32.311183   53090 ssh_runner.go:195] Run: ls
	I0807 18:45:32.315348   53090 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0807 18:45:32.319554   53090 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0807 18:45:32.319577   53090 status.go:422] ha-198246-m02 apiserver status = Running (err=<nil>)
	I0807 18:45:32.319588   53090 status.go:257] ha-198246-m02 status: &{Name:ha-198246-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 18:45:32.319606   53090 status.go:255] checking status of ha-198246-m04 ...
	I0807 18:45:32.319920   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.319956   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.334367   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44039
	I0807 18:45:32.334927   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.335439   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.335458   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.335824   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.336030   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetState
	I0807 18:45:32.337572   53090 status.go:330] ha-198246-m04 host status = "Running" (err=<nil>)
	I0807 18:45:32.337587   53090 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:45:32.337861   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.337893   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.352825   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45569
	I0807 18:45:32.353364   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.353866   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.353894   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.354193   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.354452   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetIP
	I0807 18:45:32.357271   53090 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:45:32.357720   53090 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:42:59 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:45:32.357740   53090 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:45:32.357870   53090 host.go:66] Checking if "ha-198246-m04" exists ...
	I0807 18:45:32.358168   53090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:45:32.358201   53090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:45:32.372677   53090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I0807 18:45:32.373111   53090 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:45:32.373592   53090 main.go:141] libmachine: Using API Version  1
	I0807 18:45:32.373616   53090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:45:32.373911   53090 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:45:32.374081   53090 main.go:141] libmachine: (ha-198246-m04) Calling .DriverName
	I0807 18:45:32.374216   53090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 18:45:32.374241   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHHostname
	I0807 18:45:32.376808   53090 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:45:32.377347   53090 main.go:141] libmachine: (ha-198246-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:13:d6", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:42:59 +0000 UTC Type:0 Mac:52:54:00:5b:13:d6 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:ha-198246-m04 Clientid:01:52:54:00:5b:13:d6}
	I0807 18:45:32.377369   53090 main.go:141] libmachine: (ha-198246-m04) DBG | domain ha-198246-m04 has defined IP address 192.168.39.150 and MAC address 52:54:00:5b:13:d6 in network mk-ha-198246
	I0807 18:45:32.377479   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHPort
	I0807 18:45:32.377666   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHKeyPath
	I0807 18:45:32.377816   53090 main.go:141] libmachine: (ha-198246-m04) Calling .GetSSHUsername
	I0807 18:45:32.377925   53090 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246-m04/id_rsa Username:docker}
	W0807 18:45:50.792419   53090 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.150:22: connect: no route to host
	W0807 18:45:50.792498   53090 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host
	E0807 18:45:50.792511   53090 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host
	I0807 18:45:50.792520   53090 status.go:257] ha-198246-m04 status: &{Name:ha-198246-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0807 18:45:50.792538   53090 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.150:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-198246 -n ha-198246
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-198246 logs -n 25: (1.742500372s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m04 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp testdata/cp-test.txt                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246:/home/docker/cp-test_ha-198246-m04_ha-198246.txt                       |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246 sudo cat                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246.txt                                 |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m02:/home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m02 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m03:/home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n                                                                 | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | ha-198246-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-198246 ssh -n ha-198246-m03 sudo cat                                          | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC | 07 Aug 24 18:33 UTC |
	|         | /home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-198246 node stop m02 -v=7                                                     | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-198246 node start m02 -v=7                                                    | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-198246 -v=7                                                           | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-198246 -v=7                                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-198246 --wait=true -v=7                                                    | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:39 UTC | 07 Aug 24 18:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-198246                                                                | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:43 UTC |                     |
	| node    | ha-198246 node delete m03 -v=7                                                   | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:43 UTC | 07 Aug 24 18:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-198246 stop -v=7                                                              | ha-198246 | jenkins | v1.33.1 | 07 Aug 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 18:39:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 18:39:10.703961   50940 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:39:10.704063   50940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:39:10.704074   50940 out.go:304] Setting ErrFile to fd 2...
	I0807 18:39:10.704080   50940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:39:10.704321   50940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:39:10.704903   50940 out.go:298] Setting JSON to false
	I0807 18:39:10.705810   50940 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8497,"bootTime":1723047454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:39:10.705868   50940 start.go:139] virtualization: kvm guest
	I0807 18:39:10.708186   50940 out.go:177] * [ha-198246] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 18:39:10.709520   50940 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:39:10.709538   50940 notify.go:220] Checking for updates...
	I0807 18:39:10.712003   50940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:39:10.713396   50940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:39:10.714731   50940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:39:10.715948   50940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:39:10.717225   50940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:39:10.718787   50940 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:39:10.718904   50940 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:39:10.719278   50940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:39:10.719351   50940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:39:10.733872   50940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42491
	I0807 18:39:10.734299   50940 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:39:10.734849   50940 main.go:141] libmachine: Using API Version  1
	I0807 18:39:10.734868   50940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:39:10.735149   50940 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:39:10.735301   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.771445   50940 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 18:39:10.772781   50940 start.go:297] selected driver: kvm2
	I0807 18:39:10.772800   50940 start.go:901] validating driver "kvm2" against &{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:39:10.772957   50940 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:39:10.773299   50940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:39:10.773371   50940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 18:39:10.789261   50940 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 18:39:10.789911   50940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 18:39:10.789973   50940 cni.go:84] Creating CNI manager for ""
	I0807 18:39:10.789984   50940 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0807 18:39:10.790037   50940 start.go:340] cluster config:
	{Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:39:10.790185   50940 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 18:39:10.792195   50940 out.go:177] * Starting "ha-198246" primary control-plane node in "ha-198246" cluster
	I0807 18:39:10.793566   50940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:39:10.793603   50940 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 18:39:10.793610   50940 cache.go:56] Caching tarball of preloaded images
	I0807 18:39:10.793702   50940 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 18:39:10.793712   50940 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 18:39:10.793820   50940 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/config.json ...
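Editor's note: the profile dumped in the "cluster config" block above is what gets persisted to profiles/ha-198246/config.json and reloaded on the next start. A minimal sketch of that round trip, using a hypothetical, heavily reduced ClusterConfig struct (minikube's real struct has many more fields):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// ClusterConfig is a hypothetical, reduced stand-in for the profile fields
	// visible in the log above; it is not minikube's actual type.
	type ClusterConfig struct {
		Name              string
		Driver            string
		KubernetesVersion string
		APIServerHAVIP    string
		Nodes             []Node
	}

	type Node struct {
		Name         string
		IP           string
		ControlPlane bool
		Worker       bool
	}

	func main() {
		cfg := ClusterConfig{
			Name:              "ha-198246",
			Driver:            "kvm2",
			KubernetesVersion: "v1.30.3",
			APIServerHAVIP:    "192.168.39.254",
			Nodes: []Node{
				{Name: "", IP: "192.168.39.196", ControlPlane: true, Worker: true},
				{Name: "m02", IP: "192.168.39.251", ControlPlane: true, Worker: true},
				{Name: "m03", IP: "192.168.39.227", ControlPlane: true, Worker: true},
				{Name: "m04", IP: "192.168.39.150", ControlPlane: false, Worker: true},
			},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		// Roughly the shape of what ends up in profiles/ha-198246/config.json.
		fmt.Println(string(out))
	}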
	I0807 18:39:10.794024   50940 start.go:360] acquireMachinesLock for ha-198246: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 18:39:10.794065   50940 start.go:364] duration metric: took 22.799µs to acquireMachinesLock for "ha-198246"
	I0807 18:39:10.794079   50940 start.go:96] Skipping create...Using existing machine configuration
	I0807 18:39:10.794090   50940 fix.go:54] fixHost starting: 
	I0807 18:39:10.794381   50940 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:39:10.794425   50940 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:39:10.809066   50940 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0807 18:39:10.809462   50940 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:39:10.809959   50940 main.go:141] libmachine: Using API Version  1
	I0807 18:39:10.809986   50940 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:39:10.810308   50940 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:39:10.810495   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.810681   50940 main.go:141] libmachine: (ha-198246) Calling .GetState
	I0807 18:39:10.812239   50940 fix.go:112] recreateIfNeeded on ha-198246: state=Running err=<nil>
	W0807 18:39:10.812270   50940 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 18:39:10.814145   50940 out.go:177] * Updating the running kvm2 "ha-198246" VM ...
	I0807 18:39:10.815433   50940 machine.go:94] provisionDockerMachine start ...
	I0807 18:39:10.815451   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:39:10.815630   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:10.817810   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.818187   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:10.818213   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.818335   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:10.818513   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.818654   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.818749   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:10.818901   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:10.819100   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:10.819115   50940 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 18:39:10.925855   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:39:10.925879   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:10.926078   50940 buildroot.go:166] provisioning hostname "ha-198246"
	I0807 18:39:10.926140   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:10.926308   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:10.928840   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.929204   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:10.929237   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:10.929390   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:10.929562   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.929724   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:10.929880   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:10.930029   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:10.930205   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:10.930217   50940 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-198246 && echo "ha-198246" | sudo tee /etc/hostname
	I0807 18:39:11.053216   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-198246
	
	I0807 18:39:11.053240   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.055783   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.056163   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.056191   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.056375   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.056558   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.056730   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.056872   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.057063   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:11.057246   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:11.057262   50940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-198246' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-198246/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-198246' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 18:39:11.166536   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 18:39:11.166570   50940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 18:39:11.166612   50940 buildroot.go:174] setting up certificates
	I0807 18:39:11.166625   50940 provision.go:84] configureAuth start
	I0807 18:39:11.166654   50940 main.go:141] libmachine: (ha-198246) Calling .GetMachineName
	I0807 18:39:11.166901   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:39:11.169619   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.169944   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.169968   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.170103   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.171922   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.172247   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.172274   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.172424   50940 provision.go:143] copyHostCerts
	I0807 18:39:11.172454   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:39:11.172522   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 18:39:11.172534   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 18:39:11.172630   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 18:39:11.172747   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:39:11.172773   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 18:39:11.172782   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 18:39:11.172826   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 18:39:11.172918   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:39:11.172943   50940 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 18:39:11.172951   50940 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 18:39:11.172980   50940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 18:39:11.173031   50940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.ha-198246 san=[127.0.0.1 192.168.39.196 ha-198246 localhost minikube]
	I0807 18:39:11.343149   50940 provision.go:177] copyRemoteCerts
	I0807 18:39:11.343209   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 18:39:11.343232   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.345780   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.346082   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.346106   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.346304   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.346476   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.346624   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.346732   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:39:11.433507   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 18:39:11.433590   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0807 18:39:11.466276   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 18:39:11.466358   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 18:39:11.502337   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 18:39:11.502412   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 18:39:11.534170   50940 provision.go:87] duration metric: took 367.53308ms to configureAuth
	I0807 18:39:11.534194   50940 buildroot.go:189] setting minikube options for container-runtime
	I0807 18:39:11.534425   50940 config.go:182] Loaded profile config "ha-198246": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:39:11.534509   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:39:11.537345   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.537777   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:39:11.537807   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:39:11.537990   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:39:11.538146   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.538290   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:39:11.538520   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:39:11.538671   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:39:11.538819   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:39:11.538832   50940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 18:40:42.400613   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 18:40:42.400643   50940 machine.go:97] duration metric: took 1m31.585196452s to provisionDockerMachine
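Editor's note: the stray "%!s(MISSING)" tokens in the commands above (and the "date +%!s(MISSING).%!N(MISSING)" probe further down) are Go's fmt error marker for a formatting verb that has no matching argument; they appear because a command template containing % verbs was run through a format call without arguments when it was logged. A one-line, standalone reproduction (not minikube code):

	package main

	import "fmt"

	func main() {
		// A format string with verbs but no arguments makes Go's fmt substitute
		// "%!verb(MISSING)" instead of failing, matching the logged commands.
		cmd := fmt.Sprintf("date +%s.%N")
		fmt.Println(cmd) // Output: date +%!s(MISSING).%!N(MISSING)
	}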
	I0807 18:40:42.400658   50940 start.go:293] postStartSetup for "ha-198246" (driver="kvm2")
	I0807 18:40:42.400671   50940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 18:40:42.400693   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.401072   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 18:40:42.401099   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.404010   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.404477   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.404504   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.404643   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.404845   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.405021   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.405173   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.490224   50940 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 18:40:42.494616   50940 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 18:40:42.494641   50940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 18:40:42.494695   50940 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 18:40:42.494777   50940 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 18:40:42.494787   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 18:40:42.494880   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 18:40:42.504515   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:40:42.528517   50940 start.go:296] duration metric: took 127.843726ms for postStartSetup
	I0807 18:40:42.528575   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.528885   50940 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0807 18:40:42.528916   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.531653   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.532011   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.532033   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.532169   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.532357   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.532511   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.532684   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	W0807 18:40:42.615140   50940 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0807 18:40:42.615173   50940 fix.go:56] duration metric: took 1m31.821083908s for fixHost
	I0807 18:40:42.615216   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.617521   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.617867   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.617897   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.618041   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.618255   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.618460   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.618620   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.618763   50940 main.go:141] libmachine: Using SSH client type: native
	I0807 18:40:42.618954   50940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.196 22 <nil> <nil>}
	I0807 18:40:42.618968   50940 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 18:40:42.720957   50940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723056042.683925571
	
	I0807 18:40:42.720977   50940 fix.go:216] guest clock: 1723056042.683925571
	I0807 18:40:42.720984   50940 fix.go:229] Guest: 2024-08-07 18:40:42.683925571 +0000 UTC Remote: 2024-08-07 18:40:42.615179881 +0000 UTC m=+91.947737851 (delta=68.74569ms)
	I0807 18:40:42.721007   50940 fix.go:200] guest clock delta is within tolerance: 68.74569ms
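Editor's note: the delta reported here is simply the guest's "date +%s.%N" reading minus the host-side "Remote" timestamp captured when the command returned. A standalone check of the arithmetic, using the two values taken verbatim from the fix.go lines above:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest clock (1723056042.683925571) vs. host-side Remote time (…615179881).
		guest := time.Unix(1723056042, 683925571)
		remote := time.Unix(1723056042, 615179881)
		// Prints 68.74569ms, the delta the log reports as within tolerance.
		fmt.Println(guest.Sub(remote))
	}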
	I0807 18:40:42.721012   50940 start.go:83] releasing machines lock for "ha-198246", held for 1m31.926938457s
	I0807 18:40:42.721032   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.721329   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:40:42.723792   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.724195   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.724240   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.724377   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.724857   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.725008   50940 main.go:141] libmachine: (ha-198246) Calling .DriverName
	I0807 18:40:42.725089   50940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 18:40:42.725128   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.725228   50940 ssh_runner.go:195] Run: cat /version.json
	I0807 18:40:42.725251   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHHostname
	I0807 18:40:42.727728   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.727874   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728078   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.728105   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728353   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.728389   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:42.728425   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:42.728514   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.728576   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHPort
	I0807 18:40:42.728654   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.728712   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHKeyPath
	I0807 18:40:42.728759   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.728827   50940 main.go:141] libmachine: (ha-198246) Calling .GetSSHUsername
	I0807 18:40:42.728971   50940 sshutil.go:53] new ssh client: &{IP:192.168.39.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/ha-198246/id_rsa Username:docker}
	I0807 18:40:42.831499   50940 ssh_runner.go:195] Run: systemctl --version
	I0807 18:40:42.838625   50940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 18:40:43.001017   50940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 18:40:43.011761   50940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 18:40:43.011846   50940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 18:40:43.021790   50940 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 18:40:43.021809   50940 start.go:495] detecting cgroup driver to use...
	I0807 18:40:43.021870   50940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 18:40:43.038892   50940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 18:40:43.052946   50940 docker.go:217] disabling cri-docker service (if available) ...
	I0807 18:40:43.053011   50940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 18:40:43.067629   50940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 18:40:43.082931   50940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 18:40:43.245782   50940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 18:40:43.399464   50940 docker.go:233] disabling docker service ...
	I0807 18:40:43.399546   50940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 18:40:43.417233   50940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 18:40:43.431474   50940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 18:40:43.579777   50940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 18:40:43.746564   50940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 18:40:43.761155   50940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 18:40:43.780543   50940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 18:40:43.780608   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.791780   50940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 18:40:43.791856   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.802772   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.813558   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.824211   50940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 18:40:43.835548   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.847533   50940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.859249   50940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 18:40:43.870454   50940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 18:40:43.880638   50940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 18:40:43.890756   50940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:40:44.038924   50940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 18:40:47.981542   50940 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.942581083s)
	I0807 18:40:47.981573   50940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 18:40:47.981627   50940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 18:40:47.989195   50940 start.go:563] Will wait 60s for crictl version
	I0807 18:40:47.989269   50940 ssh_runner.go:195] Run: which crictl
	I0807 18:40:47.993258   50940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 18:40:48.031869   50940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 18:40:48.031936   50940 ssh_runner.go:195] Run: crio --version
	I0807 18:40:48.060771   50940 ssh_runner.go:195] Run: crio --version
	I0807 18:40:48.094518   50940 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 18:40:48.095835   50940 main.go:141] libmachine: (ha-198246) Calling .GetIP
	I0807 18:40:48.098609   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:48.098986   50940 main.go:141] libmachine: (ha-198246) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:88:98", ip: ""} in network mk-ha-198246: {Iface:virbr1 ExpiryTime:2024-08-07 19:27:36 +0000 UTC Type:0 Mac:52:54:00:b0:88:98 Iaid: IPaddr:192.168.39.196 Prefix:24 Hostname:ha-198246 Clientid:01:52:54:00:b0:88:98}
	I0807 18:40:48.099017   50940 main.go:141] libmachine: (ha-198246) DBG | domain ha-198246 has defined IP address 192.168.39.196 and MAC address 52:54:00:b0:88:98 in network mk-ha-198246
	I0807 18:40:48.099216   50940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 18:40:48.104569   50940 kubeadm.go:883] updating cluster {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 18:40:48.104704   50940 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 18:40:48.104758   50940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:40:48.150334   50940 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:40:48.150361   50940 crio.go:433] Images already preloaded, skipping extraction
	I0807 18:40:48.150432   50940 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 18:40:48.194374   50940 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 18:40:48.194398   50940 cache_images.go:84] Images are preloaded, skipping loading
	I0807 18:40:48.194410   50940 kubeadm.go:934] updating node { 192.168.39.196 8443 v1.30.3 crio true true} ...
	I0807 18:40:48.194561   50940 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-198246 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.196
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 18:40:48.194648   50940 ssh_runner.go:195] Run: crio config
	I0807 18:40:48.247930   50940 cni.go:84] Creating CNI manager for ""
	I0807 18:40:48.247947   50940 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0807 18:40:48.247965   50940 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 18:40:48.247995   50940 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.196 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-198246 NodeName:ha-198246 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.196"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.196 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 18:40:48.248142   50940 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.196
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-198246"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.196
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.196"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 18:40:48.248170   50940 kube-vip.go:115] generating kube-vip config ...
	I0807 18:40:48.248231   50940 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0807 18:40:48.260017   50940 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0807 18:40:48.260127   50940 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
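Editor's note: once this static pod wins leader election, the address and lb_port from the manifest above (192.168.39.254:8443) front the API servers of the three control-plane nodes, which is the control-plane load-balancing the next log line says was auto-enabled. An illustrative reachability probe only, not part of the test itself:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// VIP and port taken from the kube-vip manifest above (address, lb_port).
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("control-plane VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("control-plane VIP reachable via", conn.RemoteAddr())
	}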
	I0807 18:40:48.260196   50940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 18:40:48.270430   50940 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 18:40:48.270492   50940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0807 18:40:48.280714   50940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0807 18:40:48.298345   50940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 18:40:48.317202   50940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0807 18:40:48.335643   50940 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0807 18:40:48.353957   50940 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0807 18:40:48.359383   50940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 18:40:48.511387   50940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 18:40:48.526457   50940 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246 for IP: 192.168.39.196
	I0807 18:40:48.526483   50940 certs.go:194] generating shared ca certs ...
	I0807 18:40:48.526498   50940 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.526666   50940 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 18:40:48.526718   50940 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 18:40:48.526729   50940 certs.go:256] generating profile certs ...
	I0807 18:40:48.526822   50940 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/client.key
	I0807 18:40:48.526874   50940 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22
	I0807 18:40:48.526908   50940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.196 192.168.39.251 192.168.39.227 192.168.39.254]
	I0807 18:40:48.653522   50940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 ...
	I0807 18:40:48.653551   50940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22: {Name:mk0466195f8efb396bd8881926e4f02164fcccd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.653717   50940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22 ...
	I0807 18:40:48.653728   50940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22: {Name:mk40794fd88475757a06d369c33f0c55f282e3c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 18:40:48.653794   50940 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt.50faae22 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt
	I0807 18:40:48.653953   50940 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key.50faae22 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key
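Editor's note: the regenerated apiserver certificate has to carry every address a client might use to reach the API, which is why the SAN list above includes the in-cluster service IP, loopback, the three control-plane node IPs, and the HA VIP. A rough, self-signed standard-library sketch of assembling a certificate with that SAN set (the real cert is signed by the minikubeCA key mentioned earlier, and minikube's own helper differs in detail; the CommonName below is chosen for the sketch only):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs copied from the apiserver.crt.50faae22 generation step above.
		sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
			"192.168.39.196", "192.168.39.251", "192.168.39.227", "192.168.39.254"}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed here only to keep the sketch standalone.
		if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
			panic(err)
		}
	}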
	I0807 18:40:48.654082   50940 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key
	I0807 18:40:48.654096   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 18:40:48.654109   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 18:40:48.654122   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 18:40:48.654133   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 18:40:48.654151   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 18:40:48.654160   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 18:40:48.654177   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 18:40:48.654188   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 18:40:48.654243   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 18:40:48.654272   50940 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 18:40:48.654278   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 18:40:48.654297   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 18:40:48.654315   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 18:40:48.654334   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 18:40:48.654371   50940 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 18:40:48.654395   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.654409   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:48.654420   50940 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 18:40:48.654987   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 18:40:48.682808   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 18:40:48.709144   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 18:40:48.734619   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 18:40:48.759348   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0807 18:40:48.784431   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 18:40:48.807829   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 18:40:48.832116   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/ha-198246/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 18:40:48.855849   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 18:40:48.879869   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 18:40:48.904829   50940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 18:40:48.929080   50940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 18:40:48.946022   50940 ssh_runner.go:195] Run: openssl version
	I0807 18:40:48.952109   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 18:40:48.963428   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.967946   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.967994   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 18:40:48.973699   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 18:40:48.984349   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 18:40:48.996437   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.001131   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.001192   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 18:40:49.006999   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 18:40:49.017011   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 18:40:49.028071   50940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.032454   50940 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.032493   50940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 18:40:49.038275   50940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 18:40:49.048034   50940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 18:40:49.052709   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 18:40:49.058418   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 18:40:49.064004   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 18:40:49.069490   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 18:40:49.075292   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 18:40:49.081431   50940 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 18:40:49.087223   50940 kubeadm.go:392] StartCluster: {Name:ha-198246 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-198246 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.196 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.251 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.227 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.150 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:40:49.087330   50940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 18:40:49.087373   50940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 18:40:49.126348   50940 cri.go:89] found id: "ed4c5b7171a2e8de6e5c1692ca76f0a6cfd914813c567f16ac99ae2bc9e3bb6c"
	I0807 18:40:49.126369   50940 cri.go:89] found id: "b50bfdb91d10f8e89577e5d8b828877a309b9d44954f8e2e68d0522e801195dd"
	I0807 18:40:49.126374   50940 cri.go:89] found id: "0d5e41e989cec274969ba0eb8704ee50e0e5fe8adcfb6c56802de78ff130e1f1"
	I0807 18:40:49.126379   50940 cri.go:89] found id: "806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab"
	I0807 18:40:49.126383   50940 cri.go:89] found id: "3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7"
	I0807 18:40:49.126387   50940 cri.go:89] found id: "93fcff9b17b4b2366750c04f15288dda856a885fa1e95d4510a83b2b14b855a9"
	I0807 18:40:49.126390   50940 cri.go:89] found id: "5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554"
	I0807 18:40:49.126393   50940 cri.go:89] found id: "c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8"
	I0807 18:40:49.126396   50940 cri.go:89] found id: "305290711d5443ffae9e64678e692b52bbffed39cc06b059026f167d97c5e98d"
	I0807 18:40:49.126404   50940 cri.go:89] found id: "4902df4367b62015a5a5b09ee0190709490a8b746eca969190e50981691ce473"
	I0807 18:40:49.126412   50940 cri.go:89] found id: "2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56"
	I0807 18:40:49.126416   50940 cri.go:89] found id: "981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5"
	I0807 18:40:49.126420   50940 cri.go:89] found id: "6c84edcc5a98f1ba6f54c818e3063b8d5804d1a9de0705cd8ac38826104fef36"
	I0807 18:40:49.126424   50940 cri.go:89] found id: ""
	I0807 18:40:49.126469   50940 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.393563585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056351393525664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a373dab8-64a6-4572-bdc3-bf0ebd5dc81f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.394370226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eb7a35b-1dfa-4e34-9def-540373ce452f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.394428250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eb7a35b-1dfa-4e34-9def-540373ce452f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.394947443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eb7a35b-1dfa-4e34-9def-540373ce452f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.448517873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=335240f9-beec-4c70-bf0a-7c71626012ee name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.448794592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=335240f9-beec-4c70-bf0a-7c71626012ee name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.451304762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aca86bc3-7363-4790-a2f9-9554dabf557e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.451933994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056351451869936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aca86bc3-7363-4790-a2f9-9554dabf557e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.452986295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f761a22d-73cf-46e4-bfff-85a694e7d50f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.453351672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f761a22d-73cf-46e4-bfff-85a694e7d50f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.453896293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f761a22d-73cf-46e4-bfff-85a694e7d50f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.501083811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac6621d3-411c-429d-99fc-4e8ce6a9fa7f name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.501167568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac6621d3-411c-429d-99fc-4e8ce6a9fa7f name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.502525456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09224e95-c7c9-407a-9b9b-50b164d31aa3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.502983544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056351502963073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09224e95-c7c9-407a-9b9b-50b164d31aa3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.503587096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99aab097-94e9-424c-bb6c-c5ac94d65632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.503661526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99aab097-94e9-424c-bb6c-c5ac94d65632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.504165079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99aab097-94e9-424c-bb6c-c5ac94d65632 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.551493722Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09f1c248-4db9-4871-85e1-364a4ea83b65 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.551757465Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09f1c248-4db9-4871-85e1-364a4ea83b65 name=/runtime.v1.RuntimeService/Version
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.553412738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b97bc4c1-336b-47f7-99e9-760907d22c1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.554254440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723056351554227858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b97bc4c1-336b-47f7-99e9-760907d22c1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.554720207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36dc5320-b4e1-4175-a217-8914ea98ff40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.554810023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36dc5320-b4e1-4175-a217-8914ea98ff40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 18:45:51 ha-198246 crio[3742]: time="2024-08-07 18:45:51.556292421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:009d486f82ea09a17ebb956c9c6ca314f1f09fe766880c724c94eee5ed5ffed2,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723056133751598650,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723056097750099102,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723056092756499315,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac6cd08615618bd421596f6704986267a03b6696730326d0f074ea53c6defb67,PodSandboxId:5598e77b3f2c98a5310ffd7a165baf49471b49b26d94d5397ff412b61aa28b05,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723056088028307174,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annotations:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a,PodSandboxId:6fc362f9e3c6e82f9469a6dd7e4cde3dd3ce6a00ec520cd1af397df843312820,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723056087750662099,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88457253-9aa8-4bd7-974f-1b47b341d40c,},Annotations:map[string]string{io.kubernetes.container.hash: c688b40c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f540fc3d24fc8f24e10ddae759919e3a36c0baac2084537558d55dceebb3b76,PodSandboxId:d4e80fa25c9af7ef7f9c9295e77fd2a2d64cca566b6decb508355c6e1eb48a1f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723056068972327525,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 362cdc9ecf03b90e08cef0c047f19044,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8,PodSandboxId:b016288ef11234d8583ea6583176fb4c980dbf49174a7180a5a716e0ae08c65f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723056054697353163,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f,PodSandboxId:a1d7d3fd1da9859c4278323824cdcdcba51679e18b2f77294ec98551b82967b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054995536785,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453,PodSandboxId:5ac81bf00a7a3ecace9394a3c9e8fe7d15d5ef9a8dd649175bc77f8bbd10d87d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723056054889341435,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132,PodSandboxId:a833ec31c33bb629b83ddeca118e07e39c7927c311d69a90df4f5fe625a43aa6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723056054794120846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kubernetes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5,PodSandboxId:384a81ba0d97c0e7ad6b8e0c99f2957d4b0a50cb6b97befa98772b8314e6a590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723056054750542424,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-198246,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: b12d62604f0b70faa552e6c44d8cd532,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c,PodSandboxId:45b19adfcff0198c46fdf30fbf9abe633afd8cffc4810c959d0b299a53f41c87,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723056054633792484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d,PodSandboxId:60563652ff3ff40782f019c761f2a2361b4849825e041b993739c0cd26c1d821,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723056054556133959,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b2b91906fc54e8232161e687fc4a9af5,},Annotations:map[string]string{io.kubernetes.container.hash: 4422576b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548,PodSandboxId:2667de827b56002939350a63d286aa36384dce92ca959f827a81fc71ca8faba3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723056054564748960,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Anno
tations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80335e9819afda5a240bdeaa75a8e44cfe48c8dbafa5f599d32606e0a6b453dc,PodSandboxId:4d0990efdcee83b764f38e56ae479be7f443d164067cefa10057f1576168f7c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723055519101632485,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-chh26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 42848aea-5e18-4f5c-b59d-f615d5128a74,},Annota
tions:map[string]string{io.kubernetes.container.hash: a6ef02f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab,PodSandboxId:a5394b2f1434ba21f4f4773555d63d3d4f295aff760fc79e94c5c175b4c8af4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319342523480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-w6w6g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 143456ef-ffd1-4d42-b9d0-6b778094eca5,},Annotations:map[string]string{io.kuber
netes.container.hash: 6be15b7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7,PodSandboxId:b57adade6ea152287caefc73242a7e723cff76836de4a80242c03abbb035bb13,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723055319067104704,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rbnrx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96fa387b-f93b-40df-9ed6-78834f3d02df,},Annotations:map[string]string{io.kubernetes.container.hash: 727b5a83,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554,PodSandboxId:bd5d340b4a58434695e62b4ffc8947cc9fe10963c7224febd850e872801a5ed1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723055306768392881,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-sgl8v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 574aa453-48ef-44ff-b10a-13142fc8cf7f,},Annotations:map[string]string{io.kubernetes.container.hash: f4a4ed57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8,PodSandboxId:4aec116af531d8547d5001b805d7728adf6a1402d2f9fb4b9776f15011e8490d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723055302363401299,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4l79v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 649e12b4-4e77-48a9-af9c-691694c4ec99,},Annotations:map[string]string{io.kubernetes.container.hash: 6ac1dec9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5,PodSandboxId:0e8285057cc0561c225b97a8688e2163325f9b61a96754f277a1b02818a5ef56,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f0627
88eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723055280563943121,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c60b0b92792ae1d5ba11a7a2e649f612,},Annotations:map[string]string{io.kubernetes.container.hash: 51cc6761,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56,PodSandboxId:7c56ff7ba09a0f2f1e24d97436a3c0bc5704d6f7f5f3d60c08c9f3cb424a6107,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
,State:CONTAINER_EXITED,CreatedAt:1723055280588857214,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-198246,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b90546fb511b52cb0b98695e572bae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36dc5320-b4e1-4175-a217-8914ea98ff40 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	009d486f82ea0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   6fc362f9e3c6e       storage-provisioner
	c98757fe8dd8c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   60563652ff3ff       kube-apiserver-ha-198246
	52694c1332778       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   384a81ba0d97c       kube-controller-manager-ha-198246
	ac6cd08615618       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   5598e77b3f2c9       busybox-fc5497c4f-chh26
	0336639d7a74d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   6fc362f9e3c6e       storage-provisioner
	9f540fc3d24fc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   d4e80fa25c9af       kube-vip-ha-198246
	cf1befd19e1e6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a1d7d3fd1da98       coredns-7db6d8ff4d-rbnrx
	d7cbe0ad607e5       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   5ac81bf00a7a3       kindnet-sgl8v
	f03c4cb552619       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   a833ec31c33bb       coredns-7db6d8ff4d-w6w6g
	a9e99c8b34ca1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   384a81ba0d97c       kube-controller-manager-ha-198246
	1ceccc741c65b       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   b016288ef1123       kube-proxy-4l79v
	c570124d66270       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   45b19adfcff01       kube-scheduler-ha-198246
	3b11723f44266       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   2667de827b560       etcd-ha-198246
	bef4b4746f9f5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   60563652ff3ff       kube-apiserver-ha-198246
	80335e9819afd       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4d0990efdcee8       busybox-fc5497c4f-chh26
	806c3ba54cd9b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   a5394b2f1434b       coredns-7db6d8ff4d-w6w6g
	3f9784c457acb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   b57adade6ea15       coredns-7db6d8ff4d-rbnrx
	5433090bdddca       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    17 minutes ago      Exited              kindnet-cni               0                   bd5d340b4a584       kindnet-sgl8v
	c6c6220e1a7fb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      17 minutes ago      Exited              kube-proxy                0                   4aec116af531d       kube-proxy-4l79v
	2ff4075c05c48       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   7c56ff7ba09a0       kube-scheduler-ha-198246
	981dfd0662596       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   0e8285057cc05       etcd-ha-198246
	
	
	==> coredns [3f9784c457acb6889b0277f9dfacd492961d6a50eb7dce9d4d142ab6269cbad7] <==
	[INFO] 10.244.0.4:41062 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090569s
	[INFO] 10.244.0.4:45221 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159605s
	[INFO] 10.244.0.4:52919 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008416s
	[INFO] 10.244.2.2:57336 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947478s
	[INFO] 10.244.2.2:58778 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148421s
	[INFO] 10.244.2.2:40534 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094901s
	[INFO] 10.244.2.2:34562 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001435891s
	[INFO] 10.244.2.2:40255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066647s
	[INFO] 10.244.2.2:33303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074642s
	[INFO] 10.244.2.2:54865 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065816s
	[INFO] 10.244.1.2:56362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135028s
	[INFO] 10.244.1.2:50486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103508s
	[INFO] 10.244.0.4:60915 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079398s
	[INFO] 10.244.2.2:36331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189607s
	[INFO] 10.244.1.2:44020 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226665s
	[INFO] 10.244.1.2:47459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129465s
	[INFO] 10.244.0.4:59992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000059798s
	[INFO] 10.244.0.4:55811 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139124s
	[INFO] 10.244.2.2:42718 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132316s
	[INFO] 10.244.2.2:34338 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000147334s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [806c3ba54cd9bb60d2b7a3f2bd270c1b24086847e2f6c457649efb77221d48ab] <==
	[INFO] 10.244.1.2:39185 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003274854s
	[INFO] 10.244.1.2:32995 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301562s
	[INFO] 10.244.1.2:57764 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00324711s
	[INFO] 10.244.0.4:43175 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001969935s
	[INFO] 10.244.0.4:47947 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090373s
	[INFO] 10.244.2.2:59435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185691s
	[INFO] 10.244.1.2:41342 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000215074s
	[INFO] 10.244.1.2:58323 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133762s
	[INFO] 10.244.0.4:48395 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131554s
	[INFO] 10.244.0.4:33157 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121525s
	[INFO] 10.244.0.4:53506 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084053s
	[INFO] 10.244.2.2:47826 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205944s
	[INFO] 10.244.2.2:43418 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113361s
	[INFO] 10.244.2.2:53197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103281s
	[INFO] 10.244.1.2:51874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001263s
	[INFO] 10.244.1.2:40094 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000205313s
	[INFO] 10.244.0.4:55591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001033s
	[INFO] 10.244.0.4:41281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083191s
	[INFO] 10.244.2.2:52214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093799s
	[INFO] 10.244.2.2:55578 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146065s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf1befd19e1e6038ebdbcf4a2a9aa74f9470c58b349a2cd545d1bb0fc1cc5c7f] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f03c4cb552619a0a1e2fbe3b91a0bbab66c325262881e5b18bba40f25384b132] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49806->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:49806->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-198246
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_28_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:28:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:45:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:41:42 +0000   Wed, 07 Aug 2024 18:28:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.196
	  Hostname:    ha-198246
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e31604902e0745d1a1407795d2ccbfc0
	  System UUID:                e3160490-2e07-45d1-a140-7795d2ccbfc0
	  Boot ID:                    9b0f1850-84af-432c-85c0-f24cda670347
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-chh26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-rbnrx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-w6w6g             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-198246                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-sgl8v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-198246             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-198246    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4l79v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-198246             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-198246                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 17m    kube-proxy       
	  Normal   Starting                 4m15s  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m    kubelet          Node ha-198246 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m    kubelet          Node ha-198246 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m    kubelet          Node ha-198246 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m    node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   NodeReady                17m    kubelet          Node ha-198246 status is now: NodeReady
	  Normal   RegisteredNode           15m    node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           14m    node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Warning  ContainerGCFailed        5m46s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m7s   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           4m1s   node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	  Normal   RegisteredNode           3m10s  node-controller  Node ha-198246 event: Registered Node ha-198246 in Controller
	
	
	Name:               ha-198246-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_30_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:30:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:45:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 18:44:33 +0000   Wed, 07 Aug 2024 18:44:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 18:44:33 +0000   Wed, 07 Aug 2024 18:44:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 18:44:33 +0000   Wed, 07 Aug 2024 18:44:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 18:44:33 +0000   Wed, 07 Aug 2024 18:44:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.251
	  Hostname:    ha-198246-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8eadf45fa3a45c1ace8b37287f97c9d
	  System UUID:                b8eadf45-fa3a-45c1-ace8-b37287f97c9d
	  Boot ID:                    20778be6-5f4b-49db-b89c-1662c1afc9ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g62d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-198246-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-8x6fj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-198246-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-198246-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m5ng2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-198246-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-198246-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-198246-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node ha-198246-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node ha-198246-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                   node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-198246-m02 event: Registered Node ha-198246-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-198246-m02 status is now: NodeNotReady
	
	
	Name:               ha-198246-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-198246-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=ha-198246
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:32:32 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-198246-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 18:43:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 18:43:04 +0000   Wed, 07 Aug 2024 18:44:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    ha-198246-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e050b6016e8b45679acbdd2b5c7bde62
	  System UUID:                e050b601-6e8b-4567-9acb-dd2b5c7bde62
	  Boot ID:                    5d8bf446-d965-45d0-b8f9-22abbef3d3d9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-d9znp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5vj44              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-5ggpl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-198246-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m7s                   node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-198246-m04 event: Registered Node ha-198246-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-198246-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-198246-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-198246-m04 has been rebooted, boot id: 5d8bf446-d965-45d0-b8f9-22abbef3d3d9
	  Normal   NodeReady                2m48s                  kubelet          Node ha-198246-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 3m27s)   node-controller  Node ha-198246-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071605] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.183672] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.110780] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.300871] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.248154] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.501138] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.062750] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 7 18:28] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +0.095778] kauditd_printk_skb: 79 callbacks suppressed
	[ +15.277376] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.193932] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 7 18:30] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 7 18:40] systemd-fstab-generator[3660]: Ignoring "noauto" option for root device
	[  +0.164157] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.182599] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +0.155401] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.298938] systemd-fstab-generator[3726]: Ignoring "noauto" option for root device
	[  +4.468694] systemd-fstab-generator[3831]: Ignoring "noauto" option for root device
	[  +0.093451] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.661157] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 7 18:41] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.168055] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.835526] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.736776] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [3b11723f4426642cd84fa694cc599210a0a7263025d1c9d92bfe8a28069e1548] <==
	{"level":"info","ts":"2024-08-07T18:42:22.451171Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:42:26.935326Z","caller":"traceutil/trace.go:171","msg":"trace[15955504] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2553; }","duration":"165.270774ms","start":"2024-08-07T18:42:26.770038Z","end":"2024-08-07T18:42:26.935308Z","steps":["trace[15955504] 'process raft request'  (duration: 165.239129ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.936767Z","caller":"traceutil/trace.go:171","msg":"trace[1652254745] linearizableReadLoop","detail":"{readStateIndex:2991; appliedIndex:2994; }","duration":"157.339424ms","start":"2024-08-07T18:42:26.779407Z","end":"2024-08-07T18:42:26.936746Z","steps":["trace[1652254745] 'read index received'  (duration: 157.335572ms)","trace[1652254745] 'applied index is now lower than readState.Index'  (duration: 2.775µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:42:26.938531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.991838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-w6w6g\" ","response":"range_response_count:1 size:5088"}
	{"level":"info","ts":"2024-08-07T18:42:26.939358Z","caller":"traceutil/trace.go:171","msg":"trace[561683362] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-w6w6g; range_end:; response_count:1; response_revision:2553; }","duration":"159.98222ms","start":"2024-08-07T18:42:26.779355Z","end":"2024-08-07T18:42:26.939337Z","steps":["trace[561683362] 'agreement among raft nodes before linearized reading'  (duration: 158.92817ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.939717Z","caller":"traceutil/trace.go:171","msg":"trace[254684823] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2554; }","duration":"169.480685ms","start":"2024-08-07T18:42:26.77022Z","end":"2024-08-07T18:42:26.939701Z","steps":["trace[254684823] 'process raft request'  (duration: 168.580094ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:42:26.938914Z","caller":"traceutil/trace.go:171","msg":"trace[1987519294] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2554; }","duration":"165.708781ms","start":"2024-08-07T18:42:26.773192Z","end":"2024-08-07T18:42:26.938901Z","steps":["trace[1987519294] 'process raft request'  (duration: 165.635719ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T18:43:17.87184Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.227:59344","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-07T18:43:17.888051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a14f9258d3b66c75 switched to configuration voters=(11623670073473264757 12570401416245295997)"}
	{"level":"info","ts":"2024-08-07T18:43:17.890093Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"8309c60c27e527a4","local-member-id":"a14f9258d3b66c75","removed-remote-peer-id":"8d69f1f11485af9","removed-remote-peer-urls":["https://192.168.39.227:2380"]}
	{"level":"info","ts":"2024-08-07T18:43:17.890193Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.890409Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:43:17.890526Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.890974Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:43:17.891049Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:43:17.891163Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.891537Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9","error":"context canceled"}
	{"level":"warn","ts":"2024-08-07T18:43:17.891705Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"8d69f1f11485af9","error":"failed to read 8d69f1f11485af9 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-07T18:43:17.891859Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.892157Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9","error":"context canceled"}
	{"level":"info","ts":"2024-08-07T18:43:17.892229Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:43:17.892305Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:43:17.892347Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a14f9258d3b66c75","removed-remote-peer-id":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.903311Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a14f9258d3b66c75","remote-peer-id-stream-handler":"a14f9258d3b66c75","remote-peer-id-from":"8d69f1f11485af9"}
	{"level":"warn","ts":"2024-08-07T18:43:17.916709Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"a14f9258d3b66c75","remote-peer-id-stream-handler":"a14f9258d3b66c75","remote-peer-id-from":"8d69f1f11485af9"}
	
	
	==> etcd [981dfd06625965585912df3c135439314180d555b7d7f22c591a94154b8d02a5] <==
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/07 18:39:11 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-07T18:39:11.930241Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":7815312355546630082,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-07T18:39:11.969251Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a14f9258d3b66c75","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-07T18:39:11.969664Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969743Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969788Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969917Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.969977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.970108Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.97016Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ae73097cbb5e3b7d"}
	{"level":"info","ts":"2024-08-07T18:39:11.970184Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970212Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970256Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970361Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970416Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970528Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a14f9258d3b66c75","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.970544Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"8d69f1f11485af9"}
	{"level":"info","ts":"2024-08-07T18:39:11.973405Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-07T18:39:11.973569Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.196:2380"}
	{"level":"info","ts":"2024-08-07T18:39:11.973595Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-198246","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.196:2380"],"advertise-client-urls":["https://192.168.39.196:2379"]}
	
	
	==> kernel <==
	 18:45:52 up 18 min,  0 users,  load average: 0.17, 0.39, 0.33
	Linux ha-198246 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5433090bdddca4fefcfdb1e493e17a16a53c52556c5c400971bc85490efbe554] <==
	I0807 18:38:48.091648       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:38:48.091771       1 main.go:299] handling current node
	I0807 18:38:48.091821       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:38:48.091848       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:38:48.092071       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:38:48.092105       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:38:48.092190       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:38:48.092216       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:38:58.091080       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:38:58.091321       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:38:58.091589       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:38:58.091621       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:38:58.091694       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:38:58.091714       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:38:58.091785       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:38:58.091804       1 main.go:299] handling current node
	I0807 18:39:08.099724       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:39:08.099917       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:39:08.100125       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0807 18:39:08.100153       1 main.go:322] Node ha-198246-m03 has CIDR [10.244.2.0/24] 
	I0807 18:39:08.100270       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:39:08.100291       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:39:08.100346       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:39:08.100364       1 main.go:299] handling current node
	E0807 18:39:09.959670       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [d7cbe0ad607e5085af4ede4ab3af5205622a4884e86048c7d22c53167a952453] <==
	I0807 18:45:06.019953       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:45:16.019175       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:45:16.019305       1 main.go:299] handling current node
	I0807 18:45:16.019345       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:45:16.019377       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:45:16.019621       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:45:16.019655       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:45:26.014622       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:45:26.014764       1 main.go:299] handling current node
	I0807 18:45:26.014801       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:45:26.014823       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:45:26.015023       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:45:26.015047       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:45:36.017413       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:45:36.017502       1 main.go:299] handling current node
	I0807 18:45:36.017524       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:45:36.017531       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	I0807 18:45:36.017727       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:45:36.017756       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:45:46.019555       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0807 18:45:46.019697       1 main.go:322] Node ha-198246-m04 has CIDR [10.244.3.0/24] 
	I0807 18:45:46.020113       1 main.go:295] Handling node with IPs: map[192.168.39.196:{}]
	I0807 18:45:46.020172       1 main.go:299] handling current node
	I0807 18:45:46.020200       1 main.go:295] Handling node with IPs: map[192.168.39.251:{}]
	I0807 18:45:46.020222       1 main.go:322] Node ha-198246-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [bef4b4746f9f5ea6bfef7141760f5dbe1f34a69aa9e74758acec5dd444832b0d] <==
	I0807 18:40:55.220414       1 options.go:221] external host was not specified, using 192.168.39.196
	I0807 18:40:55.221402       1 server.go:148] Version: v1.30.3
	I0807 18:40:55.221544       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:40:55.885422       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0807 18:40:55.908598       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 18:40:55.919327       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0807 18:40:55.919418       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0807 18:40:55.919747       1 instance.go:299] Using reconciler: lease
	W0807 18:41:15.884207       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0807 18:41:15.884360       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0807 18:41:15.920844       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0807 18:41:15.920878       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	
	
	==> kube-apiserver [c98757fe8dd8cb8ec35f490aa796b4b06dc028d7a54a4adb683575393af070d2] <==
	I0807 18:41:39.709779       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0807 18:41:39.710215       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 18:41:39.710425       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 18:41:39.778749       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 18:41:39.787151       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 18:41:39.787192       1 policy_source.go:224] refreshing policies
	I0807 18:41:39.800020       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 18:41:39.803027       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 18:41:39.805345       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 18:41:39.805411       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 18:41:39.806972       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 18:41:39.821935       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 18:41:39.825287       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 18:41:39.825736       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 18:41:39.826683       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 18:41:39.826942       1 aggregator.go:165] initial CRD sync complete...
	I0807 18:41:39.827026       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 18:41:39.827053       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 18:41:39.827076       1 cache.go:39] Caches are synced for autoregister controller
	W0807 18:41:39.970663       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227 192.168.39.251]
	I0807 18:41:39.971940       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 18:41:39.977766       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0807 18:41:39.983275       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0807 18:41:40.709242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0807 18:41:41.000926       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.196 192.168.39.251]
	
	
	==> kube-controller-manager [52694c1332778d9391083863ce04a544f244a010ec8a6dab0dc2ccde40e82e6b] <==
	I0807 18:43:18.550707       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.529495ms"
	I0807 18:43:18.550845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.073µs"
	I0807 18:43:29.337609       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	E0807 18:43:31.880976       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:31.881151       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:31.881194       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:31.881225       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:31.881255       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:51.881793       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:51.881893       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:51.881919       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:51.881942       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:43:51.881965       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	I0807 18:44:05.844909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-198246-m04"
	I0807 18:44:05.969994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.761033ms"
	I0807 18:44:05.971765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.038µs"
	I0807 18:44:06.018086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.874273ms"
	I0807 18:44:06.018231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.699µs"
	E0807 18:44:11.882438       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:44:11.882580       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:44:11.882595       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:44:11.882603       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	E0807 18:44:11.882616       1 gc_controller.go:153] "Failed to get node" err="node \"ha-198246-m03\" not found" logger="pod-garbage-collector-controller" node="ha-198246-m03"
	I0807 18:44:31.893698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.501735ms"
	I0807 18:44:31.894529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="247.891µs"
	
	
	==> kube-controller-manager [a9e99c8b34ca13d3da34baef04ed9db525f88b6ff50f8d51671aeb8466f833d5] <==
	I0807 18:40:56.133957       1 serving.go:380] Generated self-signed cert in-memory
	I0807 18:40:56.419739       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0807 18:40:56.419779       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:40:56.421777       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 18:40:56.421919       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 18:40:56.422476       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 18:40:56.422378       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0807 18:41:16.927071       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.196:8443/healthz\": dial tcp 192.168.39.196:8443: connect: connection refused"
	
	
	==> kube-proxy [1ceccc741c65b5d949cea547dcd00b2733112b35f535afec91b15af1656ef0e8] <==
	I0807 18:41:36.578389       1 config.go:192] "Starting service config controller"
	I0807 18:41:36.578436       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:41:36.578551       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:41:36.578571       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:41:36.579294       1 config.go:319] "Starting node config controller"
	I0807 18:41:36.579336       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0807 18:41:39.616952       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:39.617301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:39.617478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:39.617590       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0807 18:41:39.617973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647346       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647524       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:41:42.647842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:41:42.647900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0807 18:41:45.279637       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 18:41:45.478687       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:41:45.679416       1 shared_informer.go:320] Caches are synced for node config
	W0807 18:44:30.141427       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0807 18:44:30.141712       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0807 18:44:30.141789       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [c6c6220e1a7fbef5b46d57389b28bee4893fdbc5539c50d458ea957d20f1c8f8] <==
	E0807 18:38:07.606371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.678939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.679531       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:10.679930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:10.679995       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.823926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.824334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:16.824619       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:16.824685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:26.039315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:26.040083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:26.040368       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:26.040530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:29.110847       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:29.111109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:44.471365       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:44.471507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:44.471728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-198246&resourceVersion=2083": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:44.471767       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2108": dial tcp 192.168.39.254:8443: connect: no route to host
	W0807 18:38:50.615780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	E0807 18:38:50.615956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2147": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2ff4075c05c488ae3a7c359a71002929eccbca12733ebea95430cac76bd7ce56] <==
	W0807 18:39:04.616635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:04.616746       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:04.720177       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:39:04.720265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:39:04.899572       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:04.899659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:05.052221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:05.052345       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:39:05.344248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:39:05.344378       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:39:05.409802       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0807 18:39:05.409852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0807 18:39:05.476009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:39:05.476053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:39:05.481275       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:39:05.481369       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 18:39:05.873604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:39:05.873714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:39:05.888981       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0807 18:39:05.889098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0807 18:39:10.670361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 18:39:10.670540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 18:39:11.563080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:11.563140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:39:11.664228       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c570124d662707a6e166aa3c681f04bf036e2629f0e173541fa8178d4bb2804c] <==
	W0807 18:41:33.375051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:33.375227       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.196:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.110708       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.196:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.110851       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.196:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.775939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.196:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.776016       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.196:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:34.879403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:34.879562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.196:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:35.625373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:35.625570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.196:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:35.867261       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.196:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:35.867392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.196:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.111878       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.112054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.196:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.209372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.196:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.209435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.196:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.218105       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.218162       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.196:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.456926       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.456994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:36.899602       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:36.899685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	W0807 18:41:37.281783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	E0807 18:41:37.281838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.196:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.196:8443: connect: connection refused
	I0807 18:41:58.933040       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 18:42:06 ha-198246 kubelet[1372]: E0807 18:42:06.763636    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:42:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:42:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:42:13 ha-198246 kubelet[1372]: I0807 18:42:13.739005    1372 scope.go:117] "RemoveContainer" containerID="0336639d7a74d44f5a4e8759063231aa51a46920b143c3535f6572521927c20a"
	Aug 07 18:42:14 ha-198246 kubelet[1372]: I0807 18:42:14.981581    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-chh26" podStartSLOduration=617.287881698 podStartE2EDuration="10m19.981537865s" podCreationTimestamp="2024-08-07 18:31:55 +0000 UTC" firstStartedPulling="2024-08-07 18:31:56.392612234 +0000 UTC m=+229.818597257" lastFinishedPulling="2024-08-07 18:31:59.086268404 +0000 UTC m=+232.512253424" observedRunningTime="2024-08-07 18:31:59.764202578 +0000 UTC m=+233.190187619" watchObservedRunningTime="2024-08-07 18:42:14.981537865 +0000 UTC m=+848.407522911"
	Aug 07 18:42:31 ha-198246 kubelet[1372]: I0807 18:42:31.739129    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-198246" podUID="a230b27d-cbec-4a1a-a7e7-7192f3de3915"
	Aug 07 18:42:31 ha-198246 kubelet[1372]: I0807 18:42:31.761487    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-198246"
	Aug 07 18:42:36 ha-198246 kubelet[1372]: I0807 18:42:36.763710    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-198246" podStartSLOduration=5.763681356 podStartE2EDuration="5.763681356s" podCreationTimestamp="2024-08-07 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-07 18:42:36.760068714 +0000 UTC m=+870.186053754" watchObservedRunningTime="2024-08-07 18:42:36.763681356 +0000 UTC m=+870.189666414"
	Aug 07 18:43:06 ha-198246 kubelet[1372]: E0807 18:43:06.761905    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:43:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:43:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:44:06 ha-198246 kubelet[1372]: E0807 18:44:06.761348    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:44:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:44:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:44:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:44:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 18:45:06 ha-198246 kubelet[1372]: E0807 18:45:06.761006    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 18:45:06 ha-198246 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 18:45:06 ha-198246 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 18:45:06 ha-198246 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 18:45:06 ha-198246 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 18:45:51.115210   53251 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
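
The "bufio.Scanner: token too long" failure in the stderr block above is Go's bufio.ErrTooLong: a single line in lastStart.txt exceeded the Scanner's default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal, hypothetical sketch of a log reader that avoids the error by enlarging the scanner buffer — this is not minikube's own code, and the path and 10 MiB limit are purely illustrative:

	// Hypothetical reader for an oversized log file (illustrative, not minikube code):
	// raises the bufio.Scanner buffer above the 64 KiB default so very long log
	// lines do not trigger bufio.ErrTooLong ("token too long").
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); allow up to 10 MiB here.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}

An alternative without a fixed token limit is bufio.Reader.ReadString('\n'), which grows its buffer as needed.
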
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-198246 -n ha-198246
helpers_test.go:261: (dbg) Run:  kubectl --context ha-198246 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.78s)
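
The %!s(MISSING), %!F(MISSING), %!D(MISSING) and %!C(MISSING) markers scattered through the kube-proxy and kube-scheduler URLs in the log dump above are not corruption of the log capture: they are Go fmt missing-operand markers, produced when a string containing URL-escape sequences such as %21 ("!"), %2F ("/"), %3D ("=") or %2C (",") is routed through a printf-style call without matching arguments. The sketch below only illustrates that mechanism; it is an assumption about how the message was formatted, not a claim about client-go internals:

	// Demonstrates why URL-encoded selectors come out mangled when a message is
	// (mis)used as a printf format string: %21, %2F and %3D parse as verbs with
	// no matching operands, so fmt substitutes %!verb(MISSING). (go vet flags
	// these calls; the mangling is exactly the point.)
	package main

	import "fmt"

	func main() {
		// Raw selector "!service.kubernetes.io/headless", URL-encoded:
		fmt.Printf("labelSelector=%21service.kubernetes.io%2Fheadless\n")
		// prints: labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless

		fmt.Printf("fieldSelector=metadata.name%3Dha-198246\n")
		// prints: fieldSelector=metadata.name%!D(MISSING)ha-198246
	}
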

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (330.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334028
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-334028
E0807 19:01:31.079456   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-334028: exit status 82 (2m1.874881362s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-334028-m03"  ...
	* Stopping node "multinode-334028-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-334028" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334028 --wait=true -v=8 --alsologtostderr
E0807 19:04:34.127394   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 19:06:31.077114   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334028 --wait=true -v=8 --alsologtostderr: (3m25.874667167s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334028
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-334028 -n multinode-334028
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-334028 logs -n 25: (1.622641344s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028:/home/docker/cp-test_multinode-334028-m02_multinode-334028.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028 sudo cat                                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m02_multinode-334028.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03:/home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028-m03 sudo cat                                   | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp testdata/cp-test.txt                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028:/home/docker/cp-test_multinode-334028-m03_multinode-334028.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028 sudo cat                                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m03_multinode-334028.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02:/home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028-m02 sudo cat                                   | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-334028 node stop m03                                                          | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	| node    | multinode-334028 node start                                                             | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-334028                                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:01 UTC |                     |
	| stop    | -p multinode-334028                                                                     | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:01 UTC |                     |
	| start   | -p multinode-334028                                                                     | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:03 UTC | 07 Aug 24 19:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-334028                                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:03:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:03:13.104163   62561 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:03:13.104475   62561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:03:13.104493   62561 out.go:304] Setting ErrFile to fd 2...
	I0807 19:03:13.104498   62561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:03:13.105274   62561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:03:13.106229   62561 out.go:298] Setting JSON to false
	I0807 19:03:13.107177   62561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9939,"bootTime":1723047454,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:03:13.107240   62561 start.go:139] virtualization: kvm guest
	I0807 19:03:13.109614   62561 out.go:177] * [multinode-334028] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:03:13.111122   62561 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:03:13.111140   62561 notify.go:220] Checking for updates...
	I0807 19:03:13.113644   62561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:03:13.115026   62561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:03:13.116310   62561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:03:13.117522   62561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:03:13.118718   62561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:03:13.120512   62561 config.go:182] Loaded profile config "multinode-334028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:03:13.120626   62561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:03:13.121063   62561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:03:13.121138   62561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:03:13.136130   62561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0807 19:03:13.136584   62561 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:03:13.137148   62561 main.go:141] libmachine: Using API Version  1
	I0807 19:03:13.137170   62561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:03:13.137496   62561 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:03:13.137688   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.174896   62561 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 19:03:13.176338   62561 start.go:297] selected driver: kvm2
	I0807 19:03:13.176353   62561 start.go:901] validating driver "kvm2" against &{Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:03:13.176492   62561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:03:13.176798   62561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:03:13.176879   62561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:03:13.192168   62561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:03:13.192974   62561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:03:13.193050   62561 cni.go:84] Creating CNI manager for ""
	I0807 19:03:13.193062   62561 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 19:03:13.193139   62561 start.go:340] cluster config:
	{Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:03:13.193275   62561 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:03:13.196045   62561 out.go:177] * Starting "multinode-334028" primary control-plane node in "multinode-334028" cluster
	I0807 19:03:13.197419   62561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:03:13.197461   62561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 19:03:13.197470   62561 cache.go:56] Caching tarball of preloaded images
	I0807 19:03:13.197558   62561 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:03:13.197571   62561 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 19:03:13.197686   62561 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/config.json ...
	I0807 19:03:13.197934   62561 start.go:360] acquireMachinesLock for multinode-334028: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:03:13.197983   62561 start.go:364] duration metric: took 29.135µs to acquireMachinesLock for "multinode-334028"
	I0807 19:03:13.198006   62561 start.go:96] Skipping create...Using existing machine configuration
	I0807 19:03:13.198016   62561 fix.go:54] fixHost starting: 
	I0807 19:03:13.198309   62561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:03:13.198347   62561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:03:13.213169   62561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0807 19:03:13.213617   62561 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:03:13.214105   62561 main.go:141] libmachine: Using API Version  1
	I0807 19:03:13.214127   62561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:03:13.214422   62561 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:03:13.214627   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.214777   62561 main.go:141] libmachine: (multinode-334028) Calling .GetState
	I0807 19:03:13.216263   62561 fix.go:112] recreateIfNeeded on multinode-334028: state=Running err=<nil>
	W0807 19:03:13.216283   62561 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 19:03:13.218855   62561 out.go:177] * Updating the running kvm2 "multinode-334028" VM ...
	I0807 19:03:13.220051   62561 machine.go:94] provisionDockerMachine start ...
	I0807 19:03:13.220074   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.220285   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.222913   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.223258   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.223286   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.223455   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.223652   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.223809   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.223935   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.224078   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.224295   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.224306   62561 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:03:13.338146   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334028
	
	I0807 19:03:13.338175   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.338423   62561 buildroot.go:166] provisioning hostname "multinode-334028"
	I0807 19:03:13.338450   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.338627   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.341313   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.341646   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.341685   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.341778   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.342001   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.342158   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.342303   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.342416   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.342650   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.342665   62561 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-334028 && echo "multinode-334028" | sudo tee /etc/hostname
	I0807 19:03:13.473834   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334028
	
	I0807 19:03:13.473881   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.476966   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.477394   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.477432   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.477674   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.477864   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.478020   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.478159   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.478333   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.478529   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.478552   62561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-334028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-334028/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-334028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:03:13.589563   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:03:13.589589   62561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:03:13.589610   62561 buildroot.go:174] setting up certificates
	I0807 19:03:13.589621   62561 provision.go:84] configureAuth start
	I0807 19:03:13.589631   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.589964   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:03:13.593015   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.593367   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.593396   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.593547   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.595856   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.596236   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.596262   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.596363   62561 provision.go:143] copyHostCerts
	I0807 19:03:13.596406   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:03:13.596441   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:03:13.596450   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:03:13.596511   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:03:13.596598   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:03:13.596616   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:03:13.596623   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:03:13.596646   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:03:13.596721   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:03:13.596745   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:03:13.596754   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:03:13.596792   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:03:13.596903   62561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.multinode-334028 san=[127.0.0.1 192.168.39.165 localhost minikube multinode-334028]
	I0807 19:03:13.908252   62561 provision.go:177] copyRemoteCerts
	I0807 19:03:13.908300   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:03:13.908320   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.911086   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.911445   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.911472   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.911604   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.911810   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.911989   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.912161   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:03:14.000790   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 19:03:14.000868   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0807 19:03:14.027352   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 19:03:14.027438   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:03:14.053053   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 19:03:14.053149   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:03:14.079199   62561 provision.go:87] duration metric: took 489.565657ms to configureAuth
	I0807 19:03:14.079230   62561 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:03:14.079506   62561 config.go:182] Loaded profile config "multinode-334028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:03:14.079574   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:14.082171   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:14.082575   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:14.082611   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:14.082726   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:14.082935   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:14.083142   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:14.083283   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:14.083459   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:14.083632   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:14.083652   62561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:04:44.938259   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:04:44.938313   62561 machine.go:97] duration metric: took 1m31.718245018s to provisionDockerMachine
	I0807 19:04:44.938336   62561 start.go:293] postStartSetup for "multinode-334028" (driver="kvm2")
	I0807 19:04:44.938367   62561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:04:44.938401   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:44.938805   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:04:44.938841   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:44.941641   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:44.942157   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:44.942183   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:44.942354   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:44.942534   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:44.942681   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:44.942808   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.028125   62561 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:04:45.032378   62561 command_runner.go:130] > NAME=Buildroot
	I0807 19:04:45.032397   62561 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 19:04:45.032401   62561 command_runner.go:130] > ID=buildroot
	I0807 19:04:45.032406   62561 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 19:04:45.032410   62561 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 19:04:45.032448   62561 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:04:45.032471   62561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:04:45.032540   62561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:04:45.032646   62561 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:04:45.032657   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 19:04:45.032776   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:04:45.042268   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:04:45.066739   62561 start.go:296] duration metric: took 128.385682ms for postStartSetup
	I0807 19:04:45.066789   62561 fix.go:56] duration metric: took 1m31.868773792s for fixHost
	I0807 19:04:45.066812   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.069537   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.069885   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.069914   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.070103   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.070313   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.070484   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.070678   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.070843   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:04:45.071054   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:04:45.071071   62561 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:04:45.177010   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723057485.147190184
	
	I0807 19:04:45.177029   62561 fix.go:216] guest clock: 1723057485.147190184
	I0807 19:04:45.177041   62561 fix.go:229] Guest: 2024-08-07 19:04:45.147190184 +0000 UTC Remote: 2024-08-07 19:04:45.066795772 +0000 UTC m=+91.996777253 (delta=80.394412ms)
	I0807 19:04:45.177071   62561 fix.go:200] guest clock delta is within tolerance: 80.394412ms
	I0807 19:04:45.177081   62561 start.go:83] releasing machines lock for "multinode-334028", held for 1m31.979083311s
	I0807 19:04:45.177109   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.177416   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:04:45.179985   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.180377   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.180406   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.180615   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181084   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181206   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181302   62561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:04:45.181343   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.181443   62561 ssh_runner.go:195] Run: cat /version.json
	I0807 19:04:45.181467   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.183901   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184174   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184261   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.184294   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184408   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.184546   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.184577   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.184585   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184745   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.184755   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.184884   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.184924   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.185004   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.185124   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.261525   62561 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 19:04:45.261741   62561 ssh_runner.go:195] Run: systemctl --version
	I0807 19:04:45.285606   62561 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0807 19:04:45.285664   62561 command_runner.go:130] > systemd 252 (252)
	I0807 19:04:45.285691   62561 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 19:04:45.285742   62561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:04:45.450249   62561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:04:45.456241   62561 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 19:04:45.456285   62561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:04:45.456324   62561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:04:45.466082   62561 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 19:04:45.466103   62561 start.go:495] detecting cgroup driver to use...
	I0807 19:04:45.466172   62561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:04:45.483826   62561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:04:45.498722   62561 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:04:45.498788   62561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:04:45.513573   62561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:04:45.527955   62561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:04:45.674021   62561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:04:45.821302   62561 docker.go:233] disabling docker service ...
	I0807 19:04:45.821375   62561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:04:45.840625   62561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:04:45.855014   62561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:04:45.996881   62561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:04:46.144443   62561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:04:46.159398   62561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:04:46.178240   62561 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0807 19:04:46.178278   62561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 19:04:46.178320   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.189411   62561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:04:46.189477   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.200374   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.211826   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.222485   62561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:04:46.233933   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.244713   62561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.255921   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.267281   62561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:04:46.277324   62561 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 19:04:46.277440   62561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:04:46.287285   62561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:04:46.425101   62561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 19:04:46.940793   62561 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:04:46.940874   62561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:04:46.945728   62561 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0807 19:04:46.945755   62561 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 19:04:46.945764   62561 command_runner.go:130] > Device: 0,22	Inode: 1367        Links: 1
	I0807 19:04:46.945775   62561 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 19:04:46.945782   62561 command_runner.go:130] > Access: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945796   62561 command_runner.go:130] > Modify: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945804   62561 command_runner.go:130] > Change: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945809   62561 command_runner.go:130] >  Birth: -
	I0807 19:04:46.945848   62561 start.go:563] Will wait 60s for crictl version
	I0807 19:04:46.945893   62561 ssh_runner.go:195] Run: which crictl
	I0807 19:04:46.949650   62561 command_runner.go:130] > /usr/bin/crictl
	I0807 19:04:46.949710   62561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:04:46.988867   62561 command_runner.go:130] > Version:  0.1.0
	I0807 19:04:46.988983   62561 command_runner.go:130] > RuntimeName:  cri-o
	I0807 19:04:46.989060   62561 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0807 19:04:46.989115   62561 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 19:04:46.990356   62561 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:04:46.990431   62561 ssh_runner.go:195] Run: crio --version
	I0807 19:04:47.020531   62561 command_runner.go:130] > crio version 1.29.1
	I0807 19:04:47.020552   62561 command_runner.go:130] > Version:        1.29.1
	I0807 19:04:47.020558   62561 command_runner.go:130] > GitCommit:      unknown
	I0807 19:04:47.020562   62561 command_runner.go:130] > GitCommitDate:  unknown
	I0807 19:04:47.020572   62561 command_runner.go:130] > GitTreeState:   clean
	I0807 19:04:47.020577   62561 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0807 19:04:47.020581   62561 command_runner.go:130] > GoVersion:      go1.21.6
	I0807 19:04:47.020585   62561 command_runner.go:130] > Compiler:       gc
	I0807 19:04:47.020590   62561 command_runner.go:130] > Platform:       linux/amd64
	I0807 19:04:47.020595   62561 command_runner.go:130] > Linkmode:       dynamic
	I0807 19:04:47.020602   62561 command_runner.go:130] > BuildTags:      
	I0807 19:04:47.020608   62561 command_runner.go:130] >   containers_image_ostree_stub
	I0807 19:04:47.020614   62561 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0807 19:04:47.020620   62561 command_runner.go:130] >   btrfs_noversion
	I0807 19:04:47.020628   62561 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0807 19:04:47.020638   62561 command_runner.go:130] >   libdm_no_deferred_remove
	I0807 19:04:47.020644   62561 command_runner.go:130] >   seccomp
	I0807 19:04:47.020650   62561 command_runner.go:130] > LDFlags:          unknown
	I0807 19:04:47.020657   62561 command_runner.go:130] > SeccompEnabled:   true
	I0807 19:04:47.020664   62561 command_runner.go:130] > AppArmorEnabled:  false
	I0807 19:04:47.020760   62561 ssh_runner.go:195] Run: crio --version
	I0807 19:04:47.049327   62561 command_runner.go:130] > crio version 1.29.1
	I0807 19:04:47.049354   62561 command_runner.go:130] > Version:        1.29.1
	I0807 19:04:47.049362   62561 command_runner.go:130] > GitCommit:      unknown
	I0807 19:04:47.049369   62561 command_runner.go:130] > GitCommitDate:  unknown
	I0807 19:04:47.049376   62561 command_runner.go:130] > GitTreeState:   clean
	I0807 19:04:47.049384   62561 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0807 19:04:47.049391   62561 command_runner.go:130] > GoVersion:      go1.21.6
	I0807 19:04:47.049397   62561 command_runner.go:130] > Compiler:       gc
	I0807 19:04:47.049404   62561 command_runner.go:130] > Platform:       linux/amd64
	I0807 19:04:47.049412   62561 command_runner.go:130] > Linkmode:       dynamic
	I0807 19:04:47.049421   62561 command_runner.go:130] > BuildTags:      
	I0807 19:04:47.049430   62561 command_runner.go:130] >   containers_image_ostree_stub
	I0807 19:04:47.049434   62561 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0807 19:04:47.049439   62561 command_runner.go:130] >   btrfs_noversion
	I0807 19:04:47.049445   62561 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0807 19:04:47.049455   62561 command_runner.go:130] >   libdm_no_deferred_remove
	I0807 19:04:47.049460   62561 command_runner.go:130] >   seccomp
	I0807 19:04:47.049467   62561 command_runner.go:130] > LDFlags:          unknown
	I0807 19:04:47.049476   62561 command_runner.go:130] > SeccompEnabled:   true
	I0807 19:04:47.049485   62561 command_runner.go:130] > AppArmorEnabled:  false
	I0807 19:04:47.051588   62561 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 19:04:47.052984   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:04:47.055837   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:47.056213   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:47.056243   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:47.056481   62561 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 19:04:47.060533   62561 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0807 19:04:47.060724   62561 kubeadm.go:883] updating cluster {Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:04:47.060866   62561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:04:47.060946   62561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:04:47.116977   62561 command_runner.go:130] > {
	I0807 19:04:47.117002   62561 command_runner.go:130] >   "images": [
	I0807 19:04:47.117006   62561 command_runner.go:130] >     {
	I0807 19:04:47.117014   62561 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0807 19:04:47.117019   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117025   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0807 19:04:47.117029   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117036   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117058   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0807 19:04:47.117073   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0807 19:04:47.117080   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117092   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.117102   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117110   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117117   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117122   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117126   62561 command_runner.go:130] >     },
	I0807 19:04:47.117130   62561 command_runner.go:130] >     {
	I0807 19:04:47.117136   62561 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0807 19:04:47.117141   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117146   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0807 19:04:47.117151   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117155   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117166   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0807 19:04:47.117176   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0807 19:04:47.117181   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117186   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.117190   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117203   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117217   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117225   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117229   62561 command_runner.go:130] >     },
	I0807 19:04:47.117233   62561 command_runner.go:130] >     {
	I0807 19:04:47.117239   62561 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0807 19:04:47.117246   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117252   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0807 19:04:47.117258   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117262   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117272   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0807 19:04:47.117279   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0807 19:04:47.117285   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117289   62561 command_runner.go:130] >       "size": "1363676",
	I0807 19:04:47.117295   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117300   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117306   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117310   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117317   62561 command_runner.go:130] >     },
	I0807 19:04:47.117320   62561 command_runner.go:130] >     {
	I0807 19:04:47.117329   62561 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0807 19:04:47.117336   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117341   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0807 19:04:47.117347   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117351   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117361   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0807 19:04:47.117379   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0807 19:04:47.117387   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117391   62561 command_runner.go:130] >       "size": "31470524",
	I0807 19:04:47.117398   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117402   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117408   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117412   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117418   62561 command_runner.go:130] >     },
	I0807 19:04:47.117422   62561 command_runner.go:130] >     {
	I0807 19:04:47.117431   62561 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0807 19:04:47.117437   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117448   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0807 19:04:47.117455   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117460   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117470   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0807 19:04:47.117479   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0807 19:04:47.117485   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117490   62561 command_runner.go:130] >       "size": "61245718",
	I0807 19:04:47.117497   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117501   62561 command_runner.go:130] >       "username": "nonroot",
	I0807 19:04:47.117508   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117512   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117518   62561 command_runner.go:130] >     },
	I0807 19:04:47.117522   62561 command_runner.go:130] >     {
	I0807 19:04:47.117529   62561 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0807 19:04:47.117535   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117540   62561 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0807 19:04:47.117546   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117551   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117558   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0807 19:04:47.117566   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0807 19:04:47.117573   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117577   62561 command_runner.go:130] >       "size": "150779692",
	I0807 19:04:47.117584   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117588   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117594   62561 command_runner.go:130] >       },
	I0807 19:04:47.117598   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117604   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117609   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117612   62561 command_runner.go:130] >     },
	I0807 19:04:47.117616   62561 command_runner.go:130] >     {
	I0807 19:04:47.117622   62561 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0807 19:04:47.117628   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117634   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0807 19:04:47.117638   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117643   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117652   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0807 19:04:47.117665   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0807 19:04:47.117672   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117676   62561 command_runner.go:130] >       "size": "117609954",
	I0807 19:04:47.117680   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117687   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117691   62561 command_runner.go:130] >       },
	I0807 19:04:47.117697   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117702   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117708   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117711   62561 command_runner.go:130] >     },
	I0807 19:04:47.117715   62561 command_runner.go:130] >     {
	I0807 19:04:47.117721   62561 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0807 19:04:47.117726   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117731   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0807 19:04:47.117737   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117741   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117761   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0807 19:04:47.117774   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0807 19:04:47.117781   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117786   62561 command_runner.go:130] >       "size": "112198984",
	I0807 19:04:47.117792   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117796   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117825   62561 command_runner.go:130] >       },
	I0807 19:04:47.117832   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117837   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117841   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117845   62561 command_runner.go:130] >     },
	I0807 19:04:47.117848   62561 command_runner.go:130] >     {
	I0807 19:04:47.117854   62561 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0807 19:04:47.117858   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117863   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0807 19:04:47.117866   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117870   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117877   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0807 19:04:47.117884   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0807 19:04:47.117887   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117897   62561 command_runner.go:130] >       "size": "85953945",
	I0807 19:04:47.117902   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117905   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117909   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117912   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117915   62561 command_runner.go:130] >     },
	I0807 19:04:47.117918   62561 command_runner.go:130] >     {
	I0807 19:04:47.117924   62561 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0807 19:04:47.117930   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117935   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0807 19:04:47.117938   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117942   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117949   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0807 19:04:47.117959   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0807 19:04:47.117963   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117969   62561 command_runner.go:130] >       "size": "63051080",
	I0807 19:04:47.117973   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117979   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117983   62561 command_runner.go:130] >       },
	I0807 19:04:47.117990   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117999   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.118006   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.118009   62561 command_runner.go:130] >     },
	I0807 19:04:47.118013   62561 command_runner.go:130] >     {
	I0807 19:04:47.118019   62561 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0807 19:04:47.118038   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.118051   62561 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0807 19:04:47.118060   62561 command_runner.go:130] >       ],
	I0807 19:04:47.118065   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.118074   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0807 19:04:47.118081   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0807 19:04:47.118088   62561 command_runner.go:130] >       ],
	I0807 19:04:47.118093   62561 command_runner.go:130] >       "size": "750414",
	I0807 19:04:47.118096   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.118102   62561 command_runner.go:130] >         "value": "65535"
	I0807 19:04:47.118106   62561 command_runner.go:130] >       },
	I0807 19:04:47.118119   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.118124   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.118128   62561 command_runner.go:130] >       "pinned": true
	I0807 19:04:47.118131   62561 command_runner.go:130] >     }
	I0807 19:04:47.118134   62561 command_runner.go:130] >   ]
	I0807 19:04:47.118138   62561 command_runner.go:130] > }
	I0807 19:04:47.118327   62561 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:04:47.118339   62561 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:04:47.118388   62561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:04:47.153558   62561 command_runner.go:130] > {
	I0807 19:04:47.153582   62561 command_runner.go:130] >   "images": [
	I0807 19:04:47.153587   62561 command_runner.go:130] >     {
	I0807 19:04:47.153595   62561 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0807 19:04:47.153606   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153613   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0807 19:04:47.153618   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153625   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153658   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0807 19:04:47.153676   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0807 19:04:47.153682   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153686   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.153690   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153693   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153700   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153705   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153708   62561 command_runner.go:130] >     },
	I0807 19:04:47.153711   62561 command_runner.go:130] >     {
	I0807 19:04:47.153717   62561 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0807 19:04:47.153721   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153730   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0807 19:04:47.153736   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153742   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153757   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0807 19:04:47.153772   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0807 19:04:47.153779   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153787   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.153794   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153800   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153806   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153810   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153814   62561 command_runner.go:130] >     },
	I0807 19:04:47.153820   62561 command_runner.go:130] >     {
	I0807 19:04:47.153834   62561 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0807 19:04:47.153840   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153852   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0807 19:04:47.153861   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153870   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153884   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0807 19:04:47.153899   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0807 19:04:47.153907   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153916   62561 command_runner.go:130] >       "size": "1363676",
	I0807 19:04:47.153926   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153935   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153943   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153952   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153961   62561 command_runner.go:130] >     },
	I0807 19:04:47.153969   62561 command_runner.go:130] >     {
	I0807 19:04:47.153982   62561 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0807 19:04:47.153989   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153994   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0807 19:04:47.154003   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154013   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154025   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0807 19:04:47.154044   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0807 19:04:47.154053   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154062   62561 command_runner.go:130] >       "size": "31470524",
	I0807 19:04:47.154069   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154078   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154083   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154088   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154095   62561 command_runner.go:130] >     },
	I0807 19:04:47.154104   62561 command_runner.go:130] >     {
	I0807 19:04:47.154114   62561 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0807 19:04:47.154124   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154135   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0807 19:04:47.154143   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154153   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154173   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0807 19:04:47.154185   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0807 19:04:47.154193   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154201   62561 command_runner.go:130] >       "size": "61245718",
	I0807 19:04:47.154211   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154221   62561 command_runner.go:130] >       "username": "nonroot",
	I0807 19:04:47.154230   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154239   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154249   62561 command_runner.go:130] >     },
	I0807 19:04:47.154257   62561 command_runner.go:130] >     {
	I0807 19:04:47.154267   62561 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0807 19:04:47.154273   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154278   62561 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0807 19:04:47.154287   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154296   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154307   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0807 19:04:47.154322   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0807 19:04:47.154330   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154339   62561 command_runner.go:130] >       "size": "150779692",
	I0807 19:04:47.154348   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154357   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154364   62561 command_runner.go:130] >       },
	I0807 19:04:47.154368   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154376   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154382   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154390   62561 command_runner.go:130] >     },
	I0807 19:04:47.154396   62561 command_runner.go:130] >     {
	I0807 19:04:47.154410   62561 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0807 19:04:47.154417   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154425   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0807 19:04:47.154430   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154436   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154447   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0807 19:04:47.154457   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0807 19:04:47.154462   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154468   62561 command_runner.go:130] >       "size": "117609954",
	I0807 19:04:47.154473   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154480   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154484   62561 command_runner.go:130] >       },
	I0807 19:04:47.154490   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154495   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154501   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154507   62561 command_runner.go:130] >     },
	I0807 19:04:47.154511   62561 command_runner.go:130] >     {
	I0807 19:04:47.154522   62561 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0807 19:04:47.154529   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154538   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0807 19:04:47.154547   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154554   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154576   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0807 19:04:47.154590   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0807 19:04:47.154596   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154600   62561 command_runner.go:130] >       "size": "112198984",
	I0807 19:04:47.154606   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154610   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154613   62561 command_runner.go:130] >       },
	I0807 19:04:47.154617   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154622   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154625   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154631   62561 command_runner.go:130] >     },
	I0807 19:04:47.154634   62561 command_runner.go:130] >     {
	I0807 19:04:47.154640   62561 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0807 19:04:47.154646   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154651   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0807 19:04:47.154657   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154660   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154669   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0807 19:04:47.154678   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0807 19:04:47.154683   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154687   62561 command_runner.go:130] >       "size": "85953945",
	I0807 19:04:47.154693   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154697   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154701   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154707   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154710   62561 command_runner.go:130] >     },
	I0807 19:04:47.154722   62561 command_runner.go:130] >     {
	I0807 19:04:47.154730   62561 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0807 19:04:47.154736   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154743   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0807 19:04:47.154749   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154754   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154763   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0807 19:04:47.154772   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0807 19:04:47.154778   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154782   62561 command_runner.go:130] >       "size": "63051080",
	I0807 19:04:47.154785   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154791   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154795   62561 command_runner.go:130] >       },
	I0807 19:04:47.154800   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154804   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154810   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154814   62561 command_runner.go:130] >     },
	I0807 19:04:47.154819   62561 command_runner.go:130] >     {
	I0807 19:04:47.154825   62561 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0807 19:04:47.154831   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154835   62561 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0807 19:04:47.154838   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154842   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154848   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0807 19:04:47.154857   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0807 19:04:47.154863   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154867   62561 command_runner.go:130] >       "size": "750414",
	I0807 19:04:47.154872   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154878   62561 command_runner.go:130] >         "value": "65535"
	I0807 19:04:47.154881   62561 command_runner.go:130] >       },
	I0807 19:04:47.154888   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154891   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154898   62561 command_runner.go:130] >       "pinned": true
	I0807 19:04:47.154901   62561 command_runner.go:130] >     }
	I0807 19:04:47.154909   62561 command_runner.go:130] >   ]
	I0807 19:04:47.154912   62561 command_runner.go:130] > }
	I0807 19:04:47.155021   62561 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:04:47.155032   62561 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:04:47.155038   62561 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0807 19:04:47.155136   62561 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-334028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:04:47.155199   62561 ssh_runner.go:195] Run: crio config
	I0807 19:04:47.197573   62561 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0807 19:04:47.197604   62561 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0807 19:04:47.197614   62561 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0807 19:04:47.197620   62561 command_runner.go:130] > #
	I0807 19:04:47.197631   62561 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0807 19:04:47.197642   62561 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0807 19:04:47.197652   62561 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0807 19:04:47.197662   62561 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0807 19:04:47.197667   62561 command_runner.go:130] > # reload'.
	I0807 19:04:47.197675   62561 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0807 19:04:47.197688   62561 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0807 19:04:47.197700   62561 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0807 19:04:47.197712   62561 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0807 19:04:47.197720   62561 command_runner.go:130] > [crio]
	I0807 19:04:47.197729   62561 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0807 19:04:47.197739   62561 command_runner.go:130] > # containers images, in this directory.
	I0807 19:04:47.197814   62561 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0807 19:04:47.197837   62561 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0807 19:04:47.197847   62561 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0807 19:04:47.197860   62561 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0807 19:04:47.197870   62561 command_runner.go:130] > # imagestore = ""
	I0807 19:04:47.197883   62561 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0807 19:04:47.197896   62561 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0807 19:04:47.197906   62561 command_runner.go:130] > storage_driver = "overlay"
	I0807 19:04:47.197916   62561 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0807 19:04:47.197970   62561 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0807 19:04:47.197983   62561 command_runner.go:130] > storage_option = [
	I0807 19:04:47.198017   62561 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0807 19:04:47.198028   62561 command_runner.go:130] > ]
	I0807 19:04:47.198038   62561 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0807 19:04:47.198051   62561 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0807 19:04:47.198075   62561 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0807 19:04:47.198087   62561 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0807 19:04:47.198097   62561 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0807 19:04:47.198108   62561 command_runner.go:130] > # always happen on a node reboot
	I0807 19:04:47.198115   62561 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0807 19:04:47.198135   62561 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0807 19:04:47.198148   62561 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0807 19:04:47.198167   62561 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0807 19:04:47.198179   62561 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0807 19:04:47.198194   62561 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0807 19:04:47.198210   62561 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0807 19:04:47.198219   62561 command_runner.go:130] > # internal_wipe = true
	I0807 19:04:47.198235   62561 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0807 19:04:47.198247   62561 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0807 19:04:47.198257   62561 command_runner.go:130] > # internal_repair = false
	I0807 19:04:47.198268   62561 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0807 19:04:47.198280   62561 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0807 19:04:47.198293   62561 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0807 19:04:47.198305   62561 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0807 19:04:47.198319   62561 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0807 19:04:47.198328   62561 command_runner.go:130] > [crio.api]
	I0807 19:04:47.198339   62561 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0807 19:04:47.198348   62561 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0807 19:04:47.198356   62561 command_runner.go:130] > # IP address on which the stream server will listen.
	I0807 19:04:47.198368   62561 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0807 19:04:47.198381   62561 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0807 19:04:47.198388   62561 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0807 19:04:47.198397   62561 command_runner.go:130] > # stream_port = "0"
	I0807 19:04:47.198408   62561 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0807 19:04:47.198418   62561 command_runner.go:130] > # stream_enable_tls = false
	I0807 19:04:47.198427   62561 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0807 19:04:47.198439   62561 command_runner.go:130] > # stream_idle_timeout = ""
	I0807 19:04:47.198452   62561 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0807 19:04:47.198461   62561 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0807 19:04:47.198470   62561 command_runner.go:130] > # minutes.
	I0807 19:04:47.198476   62561 command_runner.go:130] > # stream_tls_cert = ""
	I0807 19:04:47.198496   62561 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0807 19:04:47.198508   62561 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0807 19:04:47.198518   62561 command_runner.go:130] > # stream_tls_key = ""
	I0807 19:04:47.198527   62561 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0807 19:04:47.198541   62561 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0807 19:04:47.198566   62561 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0807 19:04:47.198575   62561 command_runner.go:130] > # stream_tls_ca = ""
	I0807 19:04:47.198587   62561 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0807 19:04:47.198597   62561 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0807 19:04:47.198608   62561 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0807 19:04:47.198618   62561 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0807 19:04:47.198627   62561 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0807 19:04:47.198639   62561 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0807 19:04:47.198648   62561 command_runner.go:130] > [crio.runtime]
	I0807 19:04:47.198657   62561 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0807 19:04:47.198667   62561 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0807 19:04:47.198674   62561 command_runner.go:130] > # "nofile=1024:2048"
	I0807 19:04:47.198684   62561 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0807 19:04:47.198693   62561 command_runner.go:130] > # default_ulimits = [
	I0807 19:04:47.198698   62561 command_runner.go:130] > # ]
	I0807 19:04:47.198710   62561 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0807 19:04:47.198719   62561 command_runner.go:130] > # no_pivot = false
	I0807 19:04:47.198728   62561 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0807 19:04:47.198740   62561 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0807 19:04:47.198751   62561 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0807 19:04:47.198762   62561 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0807 19:04:47.198773   62561 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0807 19:04:47.198785   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0807 19:04:47.198796   62561 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0807 19:04:47.198806   62561 command_runner.go:130] > # Cgroup setting for conmon
	I0807 19:04:47.198817   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0807 19:04:47.198827   62561 command_runner.go:130] > conmon_cgroup = "pod"
	I0807 19:04:47.198836   62561 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0807 19:04:47.198847   62561 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0807 19:04:47.198857   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0807 19:04:47.198867   62561 command_runner.go:130] > conmon_env = [
	I0807 19:04:47.198884   62561 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0807 19:04:47.198894   62561 command_runner.go:130] > ]
	I0807 19:04:47.198904   62561 command_runner.go:130] > # Additional environment variables to set for all the
	I0807 19:04:47.198916   62561 command_runner.go:130] > # containers. These are overridden if set in the
	I0807 19:04:47.198929   62561 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0807 19:04:47.198938   62561 command_runner.go:130] > # default_env = [
	I0807 19:04:47.198943   62561 command_runner.go:130] > # ]
	I0807 19:04:47.198955   62561 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0807 19:04:47.198969   62561 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0807 19:04:47.198980   62561 command_runner.go:130] > # selinux = false
	I0807 19:04:47.198994   62561 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0807 19:04:47.199007   62561 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0807 19:04:47.199018   62561 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0807 19:04:47.199028   62561 command_runner.go:130] > # seccomp_profile = ""
	I0807 19:04:47.199038   62561 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0807 19:04:47.199050   62561 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0807 19:04:47.199062   62561 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0807 19:04:47.199072   62561 command_runner.go:130] > # which might increase security.
	I0807 19:04:47.199081   62561 command_runner.go:130] > # This option is currently deprecated,
	I0807 19:04:47.199091   62561 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0807 19:04:47.199099   62561 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0807 19:04:47.199109   62561 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0807 19:04:47.199123   62561 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0807 19:04:47.199137   62561 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0807 19:04:47.199151   62561 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0807 19:04:47.199164   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.199175   62561 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0807 19:04:47.199185   62561 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0807 19:04:47.199196   62561 command_runner.go:130] > # the cgroup blockio controller.
	I0807 19:04:47.199202   62561 command_runner.go:130] > # blockio_config_file = ""
	I0807 19:04:47.199215   62561 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0807 19:04:47.199224   62561 command_runner.go:130] > # blockio parameters.
	I0807 19:04:47.199231   62561 command_runner.go:130] > # blockio_reload = false
	I0807 19:04:47.199244   62561 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0807 19:04:47.199253   62561 command_runner.go:130] > # irqbalance daemon.
	I0807 19:04:47.199262   62561 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0807 19:04:47.199282   62561 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0807 19:04:47.199297   62561 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0807 19:04:47.199311   62561 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0807 19:04:47.199325   62561 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0807 19:04:47.199337   62561 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0807 19:04:47.199344   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.199350   62561 command_runner.go:130] > # rdt_config_file = ""
	I0807 19:04:47.199359   62561 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0807 19:04:47.199369   62561 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0807 19:04:47.199411   62561 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0807 19:04:47.199423   62561 command_runner.go:130] > # separate_pull_cgroup = ""
	I0807 19:04:47.199433   62561 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0807 19:04:47.199446   62561 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0807 19:04:47.199452   62561 command_runner.go:130] > # will be added.
	I0807 19:04:47.199460   62561 command_runner.go:130] > # default_capabilities = [
	I0807 19:04:47.199466   62561 command_runner.go:130] > # 	"CHOWN",
	I0807 19:04:47.199475   62561 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0807 19:04:47.199484   62561 command_runner.go:130] > # 	"FSETID",
	I0807 19:04:47.199492   62561 command_runner.go:130] > # 	"FOWNER",
	I0807 19:04:47.199501   62561 command_runner.go:130] > # 	"SETGID",
	I0807 19:04:47.199510   62561 command_runner.go:130] > # 	"SETUID",
	I0807 19:04:47.199520   62561 command_runner.go:130] > # 	"SETPCAP",
	I0807 19:04:47.199530   62561 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0807 19:04:47.199539   62561 command_runner.go:130] > # 	"KILL",
	I0807 19:04:47.199549   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199564   62561 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0807 19:04:47.199577   62561 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0807 19:04:47.199587   62561 command_runner.go:130] > # add_inheritable_capabilities = false
	I0807 19:04:47.199598   62561 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0807 19:04:47.199610   62561 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0807 19:04:47.199618   62561 command_runner.go:130] > default_sysctls = [
	I0807 19:04:47.199626   62561 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0807 19:04:47.199634   62561 command_runner.go:130] > ]
	I0807 19:04:47.199642   62561 command_runner.go:130] > # List of devices on the host that a
	I0807 19:04:47.199652   62561 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0807 19:04:47.199659   62561 command_runner.go:130] > # allowed_devices = [
	I0807 19:04:47.199672   62561 command_runner.go:130] > # 	"/dev/fuse",
	I0807 19:04:47.199678   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199685   62561 command_runner.go:130] > # List of additional devices, specified as
	I0807 19:04:47.199696   62561 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0807 19:04:47.199705   62561 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0807 19:04:47.199713   62561 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0807 19:04:47.199723   62561 command_runner.go:130] > # additional_devices = [
	I0807 19:04:47.199728   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199737   62561 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0807 19:04:47.199743   62561 command_runner.go:130] > # cdi_spec_dirs = [
	I0807 19:04:47.199750   62561 command_runner.go:130] > # 	"/etc/cdi",
	I0807 19:04:47.199756   62561 command_runner.go:130] > # 	"/var/run/cdi",
	I0807 19:04:47.199763   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199772   62561 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0807 19:04:47.199784   62561 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0807 19:04:47.199790   62561 command_runner.go:130] > # Defaults to false.
	I0807 19:04:47.199803   62561 command_runner.go:130] > # device_ownership_from_security_context = false
	I0807 19:04:47.199816   62561 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0807 19:04:47.199829   62561 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0807 19:04:47.199838   62561 command_runner.go:130] > # hooks_dir = [
	I0807 19:04:47.199846   62561 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0807 19:04:47.199854   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199863   62561 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0807 19:04:47.199876   62561 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0807 19:04:47.199888   62561 command_runner.go:130] > # its default mounts from the following two files:
	I0807 19:04:47.199894   62561 command_runner.go:130] > #
	I0807 19:04:47.199904   62561 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0807 19:04:47.199918   62561 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0807 19:04:47.199927   62561 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0807 19:04:47.199934   62561 command_runner.go:130] > #
	I0807 19:04:47.199943   62561 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0807 19:04:47.199956   62561 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0807 19:04:47.199968   62561 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0807 19:04:47.199978   62561 command_runner.go:130] > #      only add mounts it finds in this file.
	I0807 19:04:47.199983   62561 command_runner.go:130] > #
	I0807 19:04:47.199990   62561 command_runner.go:130] > # default_mounts_file = ""
	I0807 19:04:47.200006   62561 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0807 19:04:47.200022   62561 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0807 19:04:47.200028   62561 command_runner.go:130] > pids_limit = 1024
	I0807 19:04:47.200038   62561 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0807 19:04:47.200049   62561 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0807 19:04:47.200063   62561 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0807 19:04:47.200079   62561 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0807 19:04:47.200089   62561 command_runner.go:130] > # log_size_max = -1
	I0807 19:04:47.200104   62561 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0807 19:04:47.200115   62561 command_runner.go:130] > # log_to_journald = false
	I0807 19:04:47.200128   62561 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0807 19:04:47.200139   62561 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0807 19:04:47.200151   62561 command_runner.go:130] > # Path to directory for container attach sockets.
	I0807 19:04:47.200167   62561 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0807 19:04:47.200179   62561 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0807 19:04:47.200189   62561 command_runner.go:130] > # bind_mount_prefix = ""
	I0807 19:04:47.200217   62561 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0807 19:04:47.200230   62561 command_runner.go:130] > # read_only = false
	I0807 19:04:47.200242   62561 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0807 19:04:47.200261   62561 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0807 19:04:47.200270   62561 command_runner.go:130] > # live configuration reload.
	I0807 19:04:47.200277   62561 command_runner.go:130] > # log_level = "info"
	I0807 19:04:47.200289   62561 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0807 19:04:47.200300   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.200309   62561 command_runner.go:130] > # log_filter = ""
	I0807 19:04:47.200319   62561 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0807 19:04:47.200332   62561 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0807 19:04:47.200340   62561 command_runner.go:130] > # separated by comma.
	I0807 19:04:47.200352   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200363   62561 command_runner.go:130] > # uid_mappings = ""
	I0807 19:04:47.200372   62561 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0807 19:04:47.200381   62561 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0807 19:04:47.200388   62561 command_runner.go:130] > # separated by comma.
	I0807 19:04:47.200399   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200409   62561 command_runner.go:130] > # gid_mappings = ""
	I0807 19:04:47.200419   62561 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0807 19:04:47.200440   62561 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0807 19:04:47.200454   62561 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0807 19:04:47.200468   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200477   62561 command_runner.go:130] > # minimum_mappable_uid = -1
	I0807 19:04:47.200500   62561 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0807 19:04:47.200516   62561 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0807 19:04:47.200529   62561 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0807 19:04:47.200544   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200554   62561 command_runner.go:130] > # minimum_mappable_gid = -1
	I0807 19:04:47.200565   62561 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0807 19:04:47.200578   62561 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0807 19:04:47.200592   62561 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0807 19:04:47.200601   62561 command_runner.go:130] > # ctr_stop_timeout = 30
	I0807 19:04:47.200610   62561 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0807 19:04:47.200622   62561 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0807 19:04:47.200629   62561 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0807 19:04:47.200639   62561 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0807 19:04:47.200645   62561 command_runner.go:130] > drop_infra_ctr = false
	I0807 19:04:47.200657   62561 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0807 19:04:47.200668   62561 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0807 19:04:47.200681   62561 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0807 19:04:47.200690   62561 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0807 19:04:47.200701   62561 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0807 19:04:47.200716   62561 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0807 19:04:47.200729   62561 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0807 19:04:47.200740   62561 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0807 19:04:47.200751   62561 command_runner.go:130] > # shared_cpuset = ""
	I0807 19:04:47.200764   62561 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0807 19:04:47.200776   62561 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0807 19:04:47.200787   62561 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0807 19:04:47.200797   62561 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0807 19:04:47.200807   62561 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0807 19:04:47.200816   62561 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0807 19:04:47.200830   62561 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0807 19:04:47.200838   62561 command_runner.go:130] > # enable_criu_support = false
	I0807 19:04:47.200846   62561 command_runner.go:130] > # Enable/disable the generation of the container,
	I0807 19:04:47.200862   62561 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0807 19:04:47.200872   62561 command_runner.go:130] > # enable_pod_events = false
	I0807 19:04:47.200882   62561 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0807 19:04:47.200909   62561 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0807 19:04:47.200915   62561 command_runner.go:130] > # default_runtime = "runc"
	I0807 19:04:47.200927   62561 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0807 19:04:47.200942   62561 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0807 19:04:47.200959   62561 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0807 19:04:47.200970   62561 command_runner.go:130] > # creation as a file is not desired either.
	I0807 19:04:47.200983   62561 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0807 19:04:47.200997   62561 command_runner.go:130] > # the hostname is being managed dynamically.
	I0807 19:04:47.201008   62561 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0807 19:04:47.201012   62561 command_runner.go:130] > # ]
	I0807 19:04:47.201022   62561 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0807 19:04:47.201035   62561 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0807 19:04:47.201047   62561 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0807 19:04:47.201058   62561 command_runner.go:130] > # Each entry in the table should follow the format:
	I0807 19:04:47.201063   62561 command_runner.go:130] > #
	I0807 19:04:47.201073   62561 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0807 19:04:47.201082   62561 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0807 19:04:47.201142   62561 command_runner.go:130] > # runtime_type = "oci"
	I0807 19:04:47.201162   62561 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0807 19:04:47.201173   62561 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0807 19:04:47.201183   62561 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0807 19:04:47.201190   62561 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0807 19:04:47.201199   62561 command_runner.go:130] > # monitor_env = []
	I0807 19:04:47.201208   62561 command_runner.go:130] > # privileged_without_host_devices = false
	I0807 19:04:47.201218   62561 command_runner.go:130] > # allowed_annotations = []
	I0807 19:04:47.201226   62561 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0807 19:04:47.201234   62561 command_runner.go:130] > # Where:
	I0807 19:04:47.201242   62561 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0807 19:04:47.201254   62561 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0807 19:04:47.201264   62561 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0807 19:04:47.201274   62561 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0807 19:04:47.201283   62561 command_runner.go:130] > #   in $PATH.
	I0807 19:04:47.201296   62561 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0807 19:04:47.201304   62561 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0807 19:04:47.201311   62561 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0807 19:04:47.201317   62561 command_runner.go:130] > #   state.
	I0807 19:04:47.201326   62561 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0807 19:04:47.201337   62561 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0807 19:04:47.201350   62561 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0807 19:04:47.201361   62561 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0807 19:04:47.201374   62561 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0807 19:04:47.201388   62561 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0807 19:04:47.201399   62561 command_runner.go:130] > #   The currently recognized values are:
	I0807 19:04:47.201412   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0807 19:04:47.201426   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0807 19:04:47.201434   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0807 19:04:47.201443   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0807 19:04:47.201457   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0807 19:04:47.201467   62561 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0807 19:04:47.201481   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0807 19:04:47.201494   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0807 19:04:47.201505   62561 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0807 19:04:47.201517   62561 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0807 19:04:47.201527   62561 command_runner.go:130] > #   deprecated option "conmon".
	I0807 19:04:47.201539   62561 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0807 19:04:47.201550   62561 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0807 19:04:47.201559   62561 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0807 19:04:47.201566   62561 command_runner.go:130] > #   should be moved to the container's cgroup
	I0807 19:04:47.201572   62561 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0807 19:04:47.201579   62561 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0807 19:04:47.201585   62561 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0807 19:04:47.201592   62561 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0807 19:04:47.201595   62561 command_runner.go:130] > #
	I0807 19:04:47.201600   62561 command_runner.go:130] > # Using the seccomp notifier feature:
	I0807 19:04:47.201605   62561 command_runner.go:130] > #
	I0807 19:04:47.201611   62561 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0807 19:04:47.201619   62561 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0807 19:04:47.201624   62561 command_runner.go:130] > #
	I0807 19:04:47.201636   62561 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0807 19:04:47.201646   62561 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0807 19:04:47.201651   62561 command_runner.go:130] > #
	I0807 19:04:47.201657   62561 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0807 19:04:47.201664   62561 command_runner.go:130] > # feature.
	I0807 19:04:47.201667   62561 command_runner.go:130] > #
	I0807 19:04:47.201675   62561 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0807 19:04:47.201684   62561 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0807 19:04:47.201691   62561 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0807 19:04:47.201699   62561 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0807 19:04:47.201704   62561 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0807 19:04:47.201710   62561 command_runner.go:130] > #
	I0807 19:04:47.201717   62561 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0807 19:04:47.201725   62561 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0807 19:04:47.201732   62561 command_runner.go:130] > #
	I0807 19:04:47.201738   62561 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0807 19:04:47.201746   62561 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0807 19:04:47.201750   62561 command_runner.go:130] > #
	I0807 19:04:47.201756   62561 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0807 19:04:47.201763   62561 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0807 19:04:47.201769   62561 command_runner.go:130] > # limitation.
	I0807 19:04:47.201774   62561 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0807 19:04:47.201780   62561 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0807 19:04:47.201784   62561 command_runner.go:130] > runtime_type = "oci"
	I0807 19:04:47.201791   62561 command_runner.go:130] > runtime_root = "/run/runc"
	I0807 19:04:47.201795   62561 command_runner.go:130] > runtime_config_path = ""
	I0807 19:04:47.201801   62561 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0807 19:04:47.201807   62561 command_runner.go:130] > monitor_cgroup = "pod"
	I0807 19:04:47.201811   62561 command_runner.go:130] > monitor_exec_cgroup = ""
	I0807 19:04:47.201815   62561 command_runner.go:130] > monitor_env = [
	I0807 19:04:47.201823   62561 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0807 19:04:47.201828   62561 command_runner.go:130] > ]
	I0807 19:04:47.201833   62561 command_runner.go:130] > privileged_without_host_devices = false
	I0807 19:04:47.201841   62561 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0807 19:04:47.201846   62561 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0807 19:04:47.201854   62561 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0807 19:04:47.201868   62561 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0807 19:04:47.201877   62561 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0807 19:04:47.201885   62561 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0807 19:04:47.201897   62561 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0807 19:04:47.201904   62561 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0807 19:04:47.201913   62561 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0807 19:04:47.201922   62561 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0807 19:04:47.201928   62561 command_runner.go:130] > # Example:
	I0807 19:04:47.201932   62561 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0807 19:04:47.201937   62561 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0807 19:04:47.201941   62561 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0807 19:04:47.201946   62561 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0807 19:04:47.201949   62561 command_runner.go:130] > # cpuset = "0-1"
	I0807 19:04:47.201952   62561 command_runner.go:130] > # cpushares = 0
	I0807 19:04:47.201956   62561 command_runner.go:130] > # Where:
	I0807 19:04:47.201960   62561 command_runner.go:130] > # The workload name is workload-type.
	I0807 19:04:47.201966   62561 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0807 19:04:47.201971   62561 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0807 19:04:47.201976   62561 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0807 19:04:47.201983   62561 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0807 19:04:47.201988   62561 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0807 19:04:47.201993   62561 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0807 19:04:47.201998   62561 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0807 19:04:47.202002   62561 command_runner.go:130] > # Default value is set to true
	I0807 19:04:47.202009   62561 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0807 19:04:47.202014   62561 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0807 19:04:47.202018   62561 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0807 19:04:47.202022   62561 command_runner.go:130] > # Default value is set to 'false'
	I0807 19:04:47.202026   62561 command_runner.go:130] > # disable_hostport_mapping = false
	I0807 19:04:47.202031   62561 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0807 19:04:47.202034   62561 command_runner.go:130] > #
	I0807 19:04:47.202040   62561 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0807 19:04:47.202045   62561 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0807 19:04:47.202051   62561 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0807 19:04:47.202057   62561 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0807 19:04:47.202061   62561 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0807 19:04:47.202069   62561 command_runner.go:130] > [crio.image]
	I0807 19:04:47.202075   62561 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0807 19:04:47.202078   62561 command_runner.go:130] > # default_transport = "docker://"
	I0807 19:04:47.202083   62561 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0807 19:04:47.202089   62561 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0807 19:04:47.202093   62561 command_runner.go:130] > # global_auth_file = ""
	I0807 19:04:47.202097   62561 command_runner.go:130] > # The image used to instantiate infra containers.
	I0807 19:04:47.202101   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.202105   62561 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0807 19:04:47.202111   62561 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0807 19:04:47.202119   62561 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0807 19:04:47.202126   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.202132   62561 command_runner.go:130] > # pause_image_auth_file = ""
	I0807 19:04:47.202138   62561 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0807 19:04:47.202146   62561 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0807 19:04:47.202162   62561 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0807 19:04:47.202169   62561 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0807 19:04:47.202174   62561 command_runner.go:130] > # pause_command = "/pause"
	I0807 19:04:47.202181   62561 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0807 19:04:47.202189   62561 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0807 19:04:47.202194   62561 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0807 19:04:47.202202   62561 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0807 19:04:47.202208   62561 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0807 19:04:47.202216   62561 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0807 19:04:47.202220   62561 command_runner.go:130] > # pinned_images = [
	I0807 19:04:47.202226   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202231   62561 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0807 19:04:47.202239   62561 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0807 19:04:47.202246   62561 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0807 19:04:47.202253   62561 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0807 19:04:47.202261   62561 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0807 19:04:47.202265   62561 command_runner.go:130] > # signature_policy = ""
	I0807 19:04:47.202272   62561 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0807 19:04:47.202278   62561 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0807 19:04:47.202286   62561 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0807 19:04:47.202292   62561 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0807 19:04:47.202306   62561 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0807 19:04:47.202313   62561 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0807 19:04:47.202319   62561 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0807 19:04:47.202327   62561 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0807 19:04:47.202333   62561 command_runner.go:130] > # changing them here.
	I0807 19:04:47.202337   62561 command_runner.go:130] > # insecure_registries = [
	I0807 19:04:47.202342   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202347   62561 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0807 19:04:47.202354   62561 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0807 19:04:47.202359   62561 command_runner.go:130] > # image_volumes = "mkdir"
	I0807 19:04:47.202366   62561 command_runner.go:130] > # Temporary directory to use for storing big files
	I0807 19:04:47.202370   62561 command_runner.go:130] > # big_files_temporary_dir = ""
	I0807 19:04:47.202376   62561 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0807 19:04:47.202382   62561 command_runner.go:130] > # CNI plugins.
	I0807 19:04:47.202386   62561 command_runner.go:130] > [crio.network]
	I0807 19:04:47.202393   62561 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0807 19:04:47.202400   62561 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0807 19:04:47.202406   62561 command_runner.go:130] > # cni_default_network = ""
	I0807 19:04:47.202412   62561 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0807 19:04:47.202418   62561 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0807 19:04:47.202423   62561 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0807 19:04:47.202429   62561 command_runner.go:130] > # plugin_dirs = [
	I0807 19:04:47.202433   62561 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0807 19:04:47.202439   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202444   62561 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0807 19:04:47.202450   62561 command_runner.go:130] > [crio.metrics]
	I0807 19:04:47.202457   62561 command_runner.go:130] > # Globally enable or disable metrics support.
	I0807 19:04:47.202466   62561 command_runner.go:130] > enable_metrics = true
	I0807 19:04:47.202475   62561 command_runner.go:130] > # Specify enabled metrics collectors.
	I0807 19:04:47.202485   62561 command_runner.go:130] > # Per default all metrics are enabled.
	I0807 19:04:47.202497   62561 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0807 19:04:47.202509   62561 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0807 19:04:47.202520   62561 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0807 19:04:47.202529   62561 command_runner.go:130] > # metrics_collectors = [
	I0807 19:04:47.202535   62561 command_runner.go:130] > # 	"operations",
	I0807 19:04:47.202545   62561 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0807 19:04:47.202560   62561 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0807 19:04:47.202567   62561 command_runner.go:130] > # 	"operations_errors",
	I0807 19:04:47.202572   62561 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0807 19:04:47.202578   62561 command_runner.go:130] > # 	"image_pulls_by_name",
	I0807 19:04:47.202582   62561 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0807 19:04:47.202589   62561 command_runner.go:130] > # 	"image_pulls_failures",
	I0807 19:04:47.202593   62561 command_runner.go:130] > # 	"image_pulls_successes",
	I0807 19:04:47.202599   62561 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0807 19:04:47.202603   62561 command_runner.go:130] > # 	"image_layer_reuse",
	I0807 19:04:47.202610   62561 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0807 19:04:47.202614   62561 command_runner.go:130] > # 	"containers_oom_total",
	I0807 19:04:47.202620   62561 command_runner.go:130] > # 	"containers_oom",
	I0807 19:04:47.202624   62561 command_runner.go:130] > # 	"processes_defunct",
	I0807 19:04:47.202630   62561 command_runner.go:130] > # 	"operations_total",
	I0807 19:04:47.202634   62561 command_runner.go:130] > # 	"operations_latency_seconds",
	I0807 19:04:47.202641   62561 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0807 19:04:47.202645   62561 command_runner.go:130] > # 	"operations_errors_total",
	I0807 19:04:47.202651   62561 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0807 19:04:47.202655   62561 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0807 19:04:47.202662   62561 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0807 19:04:47.202666   62561 command_runner.go:130] > # 	"image_pulls_success_total",
	I0807 19:04:47.202672   62561 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0807 19:04:47.202676   62561 command_runner.go:130] > # 	"containers_oom_count_total",
	I0807 19:04:47.202687   62561 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0807 19:04:47.202691   62561 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0807 19:04:47.202697   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202702   62561 command_runner.go:130] > # The port on which the metrics server will listen.
	I0807 19:04:47.202709   62561 command_runner.go:130] > # metrics_port = 9090
	I0807 19:04:47.202713   62561 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0807 19:04:47.202719   62561 command_runner.go:130] > # metrics_socket = ""
	I0807 19:04:47.202724   62561 command_runner.go:130] > # The certificate for the secure metrics server.
	I0807 19:04:47.202732   62561 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0807 19:04:47.202740   62561 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0807 19:04:47.202744   62561 command_runner.go:130] > # certificate on any modification event.
	I0807 19:04:47.202751   62561 command_runner.go:130] > # metrics_cert = ""
	I0807 19:04:47.202757   62561 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0807 19:04:47.202769   62561 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0807 19:04:47.202775   62561 command_runner.go:130] > # metrics_key = ""
	I0807 19:04:47.202781   62561 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0807 19:04:47.202788   62561 command_runner.go:130] > [crio.tracing]
	I0807 19:04:47.202793   62561 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0807 19:04:47.202799   62561 command_runner.go:130] > # enable_tracing = false
	I0807 19:04:47.202804   62561 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0807 19:04:47.202811   62561 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0807 19:04:47.202817   62561 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0807 19:04:47.202824   62561 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0807 19:04:47.202828   62561 command_runner.go:130] > # CRI-O NRI configuration.
	I0807 19:04:47.202834   62561 command_runner.go:130] > [crio.nri]
	I0807 19:04:47.202838   62561 command_runner.go:130] > # Globally enable or disable NRI.
	I0807 19:04:47.202844   62561 command_runner.go:130] > # enable_nri = false
	I0807 19:04:47.202848   62561 command_runner.go:130] > # NRI socket to listen on.
	I0807 19:04:47.202853   62561 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0807 19:04:47.202857   62561 command_runner.go:130] > # NRI plugin directory to use.
	I0807 19:04:47.202864   62561 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0807 19:04:47.202869   62561 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0807 19:04:47.202875   62561 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0807 19:04:47.202880   62561 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0807 19:04:47.202886   62561 command_runner.go:130] > # nri_disable_connections = false
	I0807 19:04:47.202891   62561 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0807 19:04:47.202896   62561 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0807 19:04:47.202902   62561 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0807 19:04:47.202909   62561 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0807 19:04:47.202920   62561 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0807 19:04:47.202928   62561 command_runner.go:130] > [crio.stats]
	I0807 19:04:47.202937   62561 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0807 19:04:47.202947   62561 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0807 19:04:47.203034   62561 command_runner.go:130] > # stats_collection_period = 0
	I0807 19:04:47.203079   62561 command_runner.go:130] ! time="2024-08-07 19:04:47.159931880Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0807 19:04:47.203102   62561 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
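The dump above is the merged crio.conf as CRI-O sees it on the node. As a minimal sketch (not something this test run performs), a single value from it could be overridden through CRI-O's drop-in directory rather than by editing the main file; the file name 99-overrides.conf and the pids_limit value below are purely illustrative:

	# Hypothetical drop-in; CRI-O merges /etc/crio/crio.conf.d/*.conf over the base config.
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.runtime]\npids_limit = 2048\n' | sudo tee /etc/crio/crio.conf.d/99-overrides.conf
	# Restart CRI-O so the merged configuration takes effect.
	sudo systemctl restart crio

This keeps the dumped defaults intact and confines the change to one small file.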
	I0807 19:04:47.203228   62561 cni.go:84] Creating CNI manager for ""
	I0807 19:04:47.203239   62561 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 19:04:47.203251   62561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:04:47.203278   62561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-334028 NodeName:multinode-334028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:04:47.203445   62561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-334028"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:04:47.203517   62561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:04:47.215299   62561 command_runner.go:130] > kubeadm
	I0807 19:04:47.215317   62561 command_runner.go:130] > kubectl
	I0807 19:04:47.215324   62561 command_runner.go:130] > kubelet
	I0807 19:04:47.215345   62561 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:04:47.215393   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:04:47.226675   62561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0807 19:04:47.246302   62561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:04:47.265984   62561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
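With kubeadm.yaml.new now copied to the node, one hedged way to sanity-check the generated configuration (not part of the minikube flow logged here) would be kubeadm's dry-run mode, using the binaries found in the version directory above:

	# Hypothetical validation of the generated config; makes no changes to the node.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --dry-run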
	I0807 19:04:47.283769   62561 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0807 19:04:47.287840   62561 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I0807 19:04:47.287945   62561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:04:47.431245   62561 ssh_runner.go:195] Run: sudo systemctl start kubelet
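After the 10-kubeadm.conf drop-in and kubelet.service unit are written, the daemon reloaded, and kubelet started, the effect can be confirmed with standard systemd tooling; a small sketch, assuming the same node:

	# Hypothetical check that the drop-in written above is picked up and the service is running.
	systemctl cat kubelet          # should show kubelet.service plus 10-kubeadm.conf
	systemctl is-active kubelet    # expected output: active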
	I0807 19:04:47.446625   62561 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028 for IP: 192.168.39.165
	I0807 19:04:47.446646   62561 certs.go:194] generating shared ca certs ...
	I0807 19:04:47.446673   62561 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:04:47.446833   62561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:04:47.446870   62561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:04:47.446879   62561 certs.go:256] generating profile certs ...
	I0807 19:04:47.446952   62561 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/client.key
	I0807 19:04:47.447015   62561 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key.dfb147c6
	I0807 19:04:47.447051   62561 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key
	I0807 19:04:47.447062   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 19:04:47.447076   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 19:04:47.447092   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 19:04:47.447105   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 19:04:47.447117   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 19:04:47.447131   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 19:04:47.447143   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 19:04:47.447156   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 19:04:47.447210   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:04:47.447236   62561 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:04:47.447245   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:04:47.447267   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:04:47.447289   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:04:47.447313   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:04:47.447349   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:04:47.447379   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.447392   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.447405   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.447970   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:04:47.473336   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:04:47.497296   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:04:47.520492   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:04:47.544634   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 19:04:47.568080   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:04:47.592467   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:04:47.615695   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:04:47.638317   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:04:47.662548   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:04:47.685553   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:04:47.708890   62561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:04:47.729771   62561 ssh_runner.go:195] Run: openssl version
	I0807 19:04:47.738103   62561 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 19:04:47.738178   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:04:47.761732   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767669   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767875   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767928   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.777615   62561 command_runner.go:130] > b5213941
	I0807 19:04:47.777940   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:04:47.798281   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:04:47.842552   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849385   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849619   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849710   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.857418   62561 command_runner.go:130] > 51391683
	I0807 19:04:47.857764   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:04:47.879364   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:04:47.895522   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.900920   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.901230   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.901283   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.907038   62561 command_runner.go:130] > 3ec20f2e
	I0807 19:04:47.907096   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
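	The three ln -fs steps above follow OpenSSL's CA directory convention: a certificate in /etc/ssl/certs is looked up through a symlink named after its subject-name hash with a ".0" suffix, which is exactly the value printed by the preceding openssl x509 -hash calls. A minimal sketch of the equivalent manual steps, using the minikubeCA.pem path from this run (any trusted CA path can be substituted):
	  # compute the subject-name hash OpenSSL uses when resolving a CA at verify time
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # expose the certificate under /etc/ssl/certs/<hash>.0 so libssl can find it
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"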
	I0807 19:04:47.917943   62561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:04:47.927511   62561 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:04:47.927535   62561 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0807 19:04:47.927544   62561 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0807 19:04:47.927554   62561 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 19:04:47.927563   62561 command_runner.go:130] > Access: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927571   62561 command_runner.go:130] > Modify: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927577   62561 command_runner.go:130] > Change: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927582   62561 command_runner.go:130] >  Birth: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927638   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:04:47.933367   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.933546   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:04:47.939988   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.940102   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:04:47.945468   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.945792   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:04:47.951364   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.951743   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:04:47.963070   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.963213   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 19:04:47.973228   62561 command_runner.go:130] > Certificate will not expire
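	Each of the six openssl x509 -checkend 86400 probes above asks whether the named control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 produces the "Certificate will not expire" message, while 1 would mean it expires within that window. A minimal sketch that runs the same check across the certificate directories used in this run:
	  # 0 => valid for at least another 24h, 1 => expires (or has expired) within that window
	  for crt in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	    if openssl x509 -noout -checkend 86400 -in "$crt" >/dev/null; then
	      echo "ok:       $crt"
	    else
	      echo "expiring: $crt"
	    fi
	  done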
	I0807 19:04:47.975172   62561 kubeadm.go:392] StartCluster: {Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:04:47.975281   62561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:04:47.975368   62561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:04:48.024401   62561 command_runner.go:130] > ce4db7b426abe87403fa89c3fd94af24bdf03aa9c79808989468d4bd13c2a7bc
	I0807 19:04:48.024441   62561 command_runner.go:130] > a84394919dc587c6edc597087ad59d26dc822c41709e10ce4f6c1e487fe223e4
	I0807 19:04:48.024453   62561 command_runner.go:130] > 9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d
	I0807 19:04:48.024461   62561 command_runner.go:130] > 2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b
	I0807 19:04:48.024469   62561 command_runner.go:130] > 6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79
	I0807 19:04:48.024477   62561 command_runner.go:130] > ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88
	I0807 19:04:48.024487   62561 command_runner.go:130] > cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a
	I0807 19:04:48.024504   62561 command_runner.go:130] > da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072
	I0807 19:04:48.024531   62561 cri.go:89] found id: "ce4db7b426abe87403fa89c3fd94af24bdf03aa9c79808989468d4bd13c2a7bc"
	I0807 19:04:48.024545   62561 cri.go:89] found id: "a84394919dc587c6edc597087ad59d26dc822c41709e10ce4f6c1e487fe223e4"
	I0807 19:04:48.024551   62561 cri.go:89] found id: "9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d"
	I0807 19:04:48.024559   62561 cri.go:89] found id: "2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b"
	I0807 19:04:48.024569   62561 cri.go:89] found id: "6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79"
	I0807 19:04:48.024584   62561 cri.go:89] found id: "ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88"
	I0807 19:04:48.024596   62561 cri.go:89] found id: "cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a"
	I0807 19:04:48.024602   62561 cri.go:89] found id: "da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072"
	I0807 19:04:48.024609   62561 cri.go:89] found id: ""
	I0807 19:04:48.024662   62561 ssh_runner.go:195] Run: sudo runc list -f json
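	The container discovery above is a two-step check: crictl asks CRI-O for every container (in any state) whose pod label places it in kube-system, and runc list then reports the containers the low-level OCI runtime knows about, with their status, as a cross-check. The same two queries can be reproduced by hand on the node:
	  # IDs of all kube-system containers known to CRI-O, including exited ones
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # the OCI runtime's own view of container state, as JSON
	  sudo runc list -f json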
	
	
	==> CRI-O <==
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.646762878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057599646741467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6a1e3e4-9776-41ed-8dae-f725bff9de2b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.647353217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b47f02e-89e5-4f0c-a300-e3644494ab0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.647405664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b47f02e-89e5-4f0c-a300-e3644494ab0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.647933866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b47f02e-89e5-4f0c-a300-e3644494ab0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.693187781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d9266bd-4f47-4914-b5fe-0e84b1137926 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.693268323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d9266bd-4f47-4914-b5fe-0e84b1137926 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.695327248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd23ebd5-1c51-450b-853d-9398f9c3409b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.695988931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057599695921820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd23ebd5-1c51-450b-853d-9398f9c3409b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.696577703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d7011c4-846b-4107-a537-f807f5a79319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.696656038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d7011c4-846b-4107-a537-f807f5a79319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.697151887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d7011c4-846b-4107-a537-f807f5a79319 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.750915463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31a7b186-3f66-4814-b79d-4c24241e51ae name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.751048185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31a7b186-3f66-4814-b79d-4c24241e51ae name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.752117388Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=874d0e33-81c3-4223-a553-34eb5fd4c4d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.752526508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057599752503674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=874d0e33-81c3-4223-a553-34eb5fd4c4d5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.753093131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=477447d0-b08d-45d0-95a5-2cb321bdab3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.753144077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=477447d0-b08d-45d0-95a5-2cb321bdab3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.753492044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=477447d0-b08d-45d0-95a5-2cb321bdab3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.802130876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df527821-3242-4535-97a1-fd17db89fece name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.802367829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df527821-3242-4535-97a1-fd17db89fece name=/runtime.v1.RuntimeService/Version
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.808542134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10ea7cc8-c7af-478f-9383-cc1849b53e91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.809244775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057599809221152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10ea7cc8-c7af-478f-9383-cc1849b53e91 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.809849386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c702d3d6-ee4f-4791-8d30-25f06b857336 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.810014509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c702d3d6-ee4f-4791-8d30-25f06b857336 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:06:39 multinode-334028 crio[2908]: time="2024-08-07 19:06:39.810359890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c702d3d6-ee4f-4791-8d30-25f06b857336 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ec28cb619c0f1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ee248a82a815e       busybox-fc5497c4f-v64x9
	7ec38ea59ce30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   108e36891126b       coredns-7db6d8ff4d-582vz
	58ef668e9a68b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       2                   8e26a2721be9d       storage-provisioner
	61840be20cf15       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   351d8ec6860ad       kindnet-rwth9
	bceb3a268779b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   3c1de91fb727d       etcd-multinode-334028
	5a6f5ef6794eb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   f280116a6f482       kube-proxy-l8zvz
	c02a1136327b0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   8ecd971a019ae       kube-controller-manager-multinode-334028
	dd95ca599aa17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       1                   8e26a2721be9d       storage-provisioner
	c2a6a484bfabc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   3446b0b9fcd30       kube-apiserver-multinode-334028
	76c8635b399f6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   1e0b756c4036d       kube-scheduler-multinode-334028
	b6712261191c5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   108e36891126b       coredns-7db6d8ff4d-582vz
	70642e6a4a0e3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   3bcd9b98a3014       busybox-fc5497c4f-v64x9
	9e1010d7bf2b3       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago        Exited              kindnet-cni               0                   75585ea11a7b4       kindnet-rwth9
	2ca940561e18e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   39903e5997b32       kube-proxy-l8zvz
	6da107968aee7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   ed9e2d85fd55e       kube-scheduler-multinode-334028
	ffc63a732f6bf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   62d19a8b6aa97       kube-controller-manager-multinode-334028
	cf1948299290c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   dbac8324051a4       etcd-multinode-334028
	da12cb48b4b16       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   3eebdfe2361ee       kube-apiserver-multinode-334028
	
	
	==> coredns [7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40959 - 43980 "HINFO IN 2210918481587173305.2722027126383920797. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014744373s
	
	
	==> coredns [b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40124 - 32074 "HINFO IN 183290254663183692.2361621144747932340. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021470383s
	
	
	==> describe nodes <==
	Name:               multinode-334028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-334028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_57_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:57:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334028
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:06:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-334028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71b24a9feed2442eb9e04eb78076e9c1
	  System UUID:                71b24a9f-eed2-442e-b9e0-4eb78076e9c1
	  Boot ID:                    bf99b756-3ae4-48e4-9741-9e9664912a97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v64x9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 coredns-7db6d8ff4d-582vz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m31s
	  kube-system                 etcd-multinode-334028                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m46s
	  kube-system                 kindnet-rwth9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m31s
	  kube-system                 kube-apiserver-multinode-334028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-controller-manager-multinode-334028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-proxy-l8zvz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-scheduler-multinode-334028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 102s   kube-proxy       
	  Normal   Starting                 8m28s  kube-proxy       
	  Normal   Starting                 8m45s  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m45s  kubelet          Node multinode-334028 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m45s  kubelet          Node multinode-334028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m45s  kubelet          Node multinode-334028 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m32s  node-controller  Node multinode-334028 event: Registered Node multinode-334028 in Controller
	  Normal   NodeReady                8m15s  kubelet          Node multinode-334028 status is now: NodeReady
	  Warning  ContainerGCFailed        2m45s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 101s   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  101s   kubelet          Node multinode-334028 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s   kubelet          Node multinode-334028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s   kubelet          Node multinode-334028 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  101s   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           90s    node-controller  Node multinode-334028 event: Registered Node multinode-334028 in Controller
	
	
	Name:               multinode-334028-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334028-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-334028
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_05_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:05:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334028-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:06:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:05:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:05:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:05:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:05:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    multinode-334028-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5eaf29ba1bed4de0b392b62bc360b7ce
	  System UUID:                5eaf29ba-1bed-4de0-b392-b62bc360b7ce
	  Boot ID:                    76414706-d5c2-47ff-9914-b2ce188f20d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qq6w4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-rdhb6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m43s
	  kube-system                 kube-proxy-fpwg7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m37s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m43s (x2 over 7m43s)  kubelet     Node multinode-334028-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s (x2 over 7m43s)  kubelet     Node multinode-334028-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s (x2 over 7m43s)  kubelet     Node multinode-334028-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m22s                  kubelet     Node multinode-334028-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-334028-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-334028-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-334028-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-334028-m02 status is now: NodeReady
	
	
	Name:               multinode-334028-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334028-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-334028
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_06_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:06:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334028-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:06:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:06:36 +0000   Wed, 07 Aug 2024 19:06:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:06:36 +0000   Wed, 07 Aug 2024 19:06:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:06:36 +0000   Wed, 07 Aug 2024 19:06:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:06:36 +0000   Wed, 07 Aug 2024 19:06:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    multinode-334028-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d596dc28dcf04aadb8c55c314e32140d
	  System UUID:                d596dc28-dcf0-4aad-b8c5-5c314e32140d
	  Boot ID:                    35dc1d55-34e9-4a1c-b41a-01aa054bd687
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-48b87       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m40s
	  kube-system                 kube-proxy-sgwkv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m36s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m47s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet     Node multinode-334028-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m21s                  kubelet     Node multinode-334028-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet     Node multinode-334028-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m32s                  kubelet     Node multinode-334028-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  24s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s (x2 over 24s)      kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 24s)      kubelet     Node multinode-334028-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 24s)      kubelet     Node multinode-334028-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4s                     kubelet     Node multinode-334028-m03 status is now: NodeReady
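
The two node dumps above are plain "kubectl describe node" output; note that multinode-334028-m03 still carries the node.kubernetes.io/not-ready:NoExecute taint even though its Ready condition flipped to True only a few seconds before the dump. A minimal re-check of taints and readiness (a sketch, assuming the kubeconfig from this run is still the active context) could be:

    kubectl describe node multinode-334028-m03
    kubectl get node multinode-334028-m03 \
      -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'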
	
	
	==> dmesg <==
	[  +0.058048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.172648] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.144592] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.276303] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.158300] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.387732] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.061197] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989866] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.076858] kauditd_printk_skb: 69 callbacks suppressed
	[Aug 7 18:58] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.130379] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.486439] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 7 18:59] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 7 19:04] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.154516] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.171748] systemd-fstab-generator[2854]: Ignoring "noauto" option for root device
	[  +0.142717] systemd-fstab-generator[2867]: Ignoring "noauto" option for root device
	[  +0.290406] systemd-fstab-generator[2895]: Ignoring "noauto" option for root device
	[  +1.002178] systemd-fstab-generator[2994]: Ignoring "noauto" option for root device
	[  +5.570393] kauditd_printk_skb: 132 callbacks suppressed
	[  +6.576835] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.095726] kauditd_printk_skb: 64 callbacks suppressed
	[Aug 7 19:05] kauditd_printk_skb: 24 callbacks suppressed
	[  +3.198936] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[ +13.250081] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906] <==
	{"level":"info","ts":"2024-08-07T19:04:54.045359Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:04:54.054452Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.054503Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.054512Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.058524Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:04:54.074611Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:04:54.074667Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:04:54.074708Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:04:54.074714Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:04:55.794281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.801141Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-334028 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:04:55.801174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:04:55.801173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:04:55.801451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:04:55.801463Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:04:55.803179Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-08-07T19:04:55.804164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-07T19:06:25.203744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.433661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m03\" ","response":"range_response_count:1 size:3117"}
	{"level":"info","ts":"2024-08-07T19:06:25.20418Z","caller":"traceutil/trace.go:171","msg":"trace[712435443] range","detail":"{range_begin:/registry/minions/multinode-334028-m03; range_end:; response_count:1; response_revision:1235; }","duration":"158.912325ms","start":"2024-08-07T19:06:25.045212Z","end":"2024-08-07T19:06:25.204124Z","steps":["trace[712435443] 'range keys from in-memory index tree'  (duration: 157.089362ms)"],"step_count":1}
	
	
	==> etcd [cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a] <==
	{"level":"info","ts":"2024-08-07T18:57:51.092601Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:57:51.092637Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:57:51.100998Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T18:57:51.101033Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-07T18:58:57.792232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.910092ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705900378134616216 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:59f6912e34631897>","response":"size:41"}
	{"level":"info","ts":"2024-08-07T18:58:57.792528Z","caller":"traceutil/trace.go:171","msg":"trace[2089730293] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:482; }","duration":"144.556547ms","start":"2024-08-07T18:58:57.647939Z","end":"2024-08-07T18:58:57.792496Z","steps":["trace[2089730293] 'read index received'  (duration: 143.830981ms)","trace[2089730293] 'applied index is now lower than readState.Index'  (duration: 724.933µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:58:57.793077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.112393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-07T18:58:57.793197Z","caller":"traceutil/trace.go:171","msg":"trace[2017032617] range","detail":"{range_begin:/registry/minions/multinode-334028-m02; range_end:; response_count:1; response_revision:460; }","duration":"145.263142ms","start":"2024-08-07T18:58:57.647916Z","end":"2024-08-07T18:58:57.793179Z","steps":["trace[2017032617] 'agreement among raft nodes before linearized reading'  (duration: 144.704573ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:58:57.793405Z","caller":"traceutil/trace.go:171","msg":"trace[1806786512] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"166.140865ms","start":"2024-08-07T18:58:57.627253Z","end":"2024-08-07T18:58:57.793393Z","steps":["trace[1806786512] 'process raft request'  (duration: 165.147025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:00:00.018481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.577582ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705900378134616706 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:59f6912e34631a81>","response":"size:41"}
	{"level":"info","ts":"2024-08-07T19:00:00.019122Z","caller":"traceutil/trace.go:171","msg":"trace[1328949118] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"175.541991ms","start":"2024-08-07T18:59:59.843546Z","end":"2024-08-07T19:00:00.019088Z","steps":["trace[1328949118] 'process raft request'  (duration: 175.367737ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:00:00.019312Z","caller":"traceutil/trace.go:171","msg":"trace[1262592073] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"240.646893ms","start":"2024-08-07T18:59:59.778652Z","end":"2024-08-07T19:00:00.019299Z","steps":["trace[1262592073] 'read index received'  (duration: 77.260184ms)","trace[1262592073] 'applied index is now lower than readState.Index'  (duration: 163.386138ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T19:00:00.019467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.804335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-08-07T19:00:00.01951Z","caller":"traceutil/trace.go:171","msg":"trace[64142701] range","detail":"{range_begin:/registry/minions/multinode-334028-m03; range_end:; response_count:1; response_revision:602; }","duration":"240.875375ms","start":"2024-08-07T18:59:59.778628Z","end":"2024-08-07T19:00:00.019503Z","steps":["trace[64142701] 'agreement among raft nodes before linearized reading'  (duration: 240.766795ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:00:52.849683Z","caller":"traceutil/trace.go:171","msg":"trace[356690587] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"106.162121ms","start":"2024-08-07T19:00:52.743488Z","end":"2024-08-07T19:00:52.84965Z","steps":["trace[356690587] 'process raft request'  (duration: 106.017597ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:03:14.19646Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-07T19:03:14.196579Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-334028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"warn","ts":"2024-08-07T19:03:14.196687Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.196805Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.237259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.237316Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-07T19:03:14.237375Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2024-08-07T19:03:14.243198Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:03:14.243367Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:03:14.24338Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-334028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 19:06:40 up 9 min,  0 users,  load average: 0.28, 0.25, 0.13
	Linux multinode-334028 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2] <==
	I0807 19:05:54.438389       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:06:04.440257       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:06:04.440329       1 main.go:299] handling current node
	I0807 19:06:04.440364       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:06:04.440372       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:06:04.440549       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:06:04.440585       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:06:14.441399       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:06:14.441450       1 main.go:299] handling current node
	I0807 19:06:14.441468       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:06:14.441475       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:06:14.441677       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:06:14.441709       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:06:24.438707       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:06:24.438780       1 main.go:299] handling current node
	I0807 19:06:24.438806       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:06:24.438814       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:06:24.439044       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:06:24.439072       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.2.0/24] 
	I0807 19:06:34.438338       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:06:34.438437       1 main.go:299] handling current node
	I0807 19:06:34.438491       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:06:34.438498       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:06:34.438677       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:06:34.438685       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.2.0/24] 
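
Worth noting in the block above: at 19:06:24 kindnet switches multinode-334028-m03 from 10.244.3.0/24 to 10.244.2.0/24, matching the PodCIDR shown in the node description after that node re-registered. A quick way to confirm the current per-node assignments (a sketch, assuming the same kubeconfig) would be:

    kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR,PODCIDRS:.spec.podCIDRs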
	
	
	==> kindnet [9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d] <==
	I0807 19:02:24.745720       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:34.755016       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:34.755232       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:34.755422       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:34.755450       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:34.755569       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:34.755590       1 main.go:299] handling current node
	I0807 19:02:44.750648       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:44.750696       1 main.go:299] handling current node
	I0807 19:02:44.750721       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:44.750726       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:44.750880       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:44.750906       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:54.752355       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:54.752521       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:54.752684       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:54.752728       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:54.752796       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:54.752815       1 main.go:299] handling current node
	I0807 19:03:04.748160       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:03:04.748209       1 main.go:299] handling current node
	I0807 19:03:04.748251       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:03:04.748258       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:03:04.748427       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:03:04.748454       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef] <==
	I0807 19:04:57.148180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 19:04:57.150307       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 19:04:57.159706       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 19:04:57.159848       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 19:04:57.162511       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 19:04:57.162606       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 19:04:57.162771       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 19:04:57.170088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 19:04:57.170615       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 19:04:57.170677       1 policy_source.go:224] refreshing policies
	I0807 19:04:57.171276       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 19:04:57.171790       1 aggregator.go:165] initial CRD sync complete...
	I0807 19:04:57.171853       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 19:04:57.171883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 19:04:57.171913       1 cache.go:39] Caches are synced for autoregister controller
	E0807 19:04:57.196220       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0807 19:04:57.249805       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:04:58.053754       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 19:05:00.329424       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:05:00.448432       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:05:00.460846       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:05:00.525362       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:05:00.531024       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 19:05:10.205251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 19:05:10.251561       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072] <==
	W0807 19:03:14.220566       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.220601       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.220632       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.221442       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.222104       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.222136       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.226506       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227171       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227705       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227788       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227846       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227889       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227917       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228000       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228043       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228081       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228088       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228123       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228126       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228157       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228198       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228230       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228246       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228277       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228293       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
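
The repeated "connection refused" dials to 127.0.0.1:2379 above come from the old apiserver shutting down after etcd had already stopped; the replacement instance in the previous block reports its caches synced. To confirm the serving apiserver is healthy, something along these lines would do (a sketch; /readyz?verbose is the standard aggregated readiness endpoint, assuming the active kubeconfig points at this cluster):

    kubectl get --raw='/readyz?verbose' | tail -n 5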
	
	
	==> kube-controller-manager [c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823] <==
	I0807 19:05:10.650746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0807 19:05:30.842522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.213µs"
	I0807 19:05:33.630917       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.207505ms"
	I0807 19:05:33.631233       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.521µs"
	I0807 19:05:33.643336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.738425ms"
	I0807 19:05:33.657123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.280121ms"
	I0807 19:05:33.657244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.424µs"
	I0807 19:05:37.957616       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m02\" does not exist"
	I0807 19:05:37.972630       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m02" podCIDRs=["10.244.1.0/24"]
	I0807 19:05:39.874350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.271µs"
	I0807 19:05:39.888146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.508µs"
	I0807 19:05:39.912485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.897µs"
	I0807 19:05:39.945260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.787µs"
	I0807 19:05:39.953674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.497µs"
	I0807 19:05:39.968260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.056µs"
	I0807 19:05:57.611804       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:05:57.631570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.292µs"
	I0807 19:05:57.646036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.546µs"
	I0807 19:06:01.444898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.061209ms"
	I0807 19:06:01.445186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.991µs"
	I0807 19:06:15.951437       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:06:17.054561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:06:17.054649       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:06:17.070057       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.2.0/24"]
	I0807 19:06:36.749382       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m03"
	
	
	==> kube-controller-manager [ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88] <==
	I0807 18:58:57.800526       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m02\" does not exist"
	I0807 18:58:57.813589       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m02" podCIDRs=["10.244.1.0/24"]
	I0807 18:58:58.112191       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-334028-m02"
	I0807 18:59:18.518118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 18:59:20.912998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.705993ms"
	I0807 18:59:20.938487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.429681ms"
	I0807 18:59:20.965308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.727232ms"
	I0807 18:59:20.965423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.323µs"
	I0807 18:59:24.845699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.855256ms"
	I0807 18:59:24.845796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.106µs"
	I0807 18:59:24.939652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.725402ms"
	I0807 18:59:24.940282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.746µs"
	I0807 19:00:00.024013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:00:00.024140       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:00.060220       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.2.0/24"]
	I0807 19:00:03.136267       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-334028-m03"
	I0807 19:00:19.481826       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:47.635182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:48.697686       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:00:48.697831       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:48.711394       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.3.0/24"]
	I0807 19:01:08.427403       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m03"
	I0807 19:01:53.194780       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:01:53.245692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.462347ms"
	I0807 19:01:53.245896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.357µs"
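
The "does not exist" messages above are the attach-detach controller racing node registration; each is followed by the IPAM controller assigning a PodCIDR, so they are transient. A sketch for pulling the node-scoped events that correspond to these lines (field selectors on kind/name are standard, but what remains depends on event retention):

    kubectl get events --field-selector involvedObject.kind=Node,involvedObject.name=multinode-334028-m03 \
      --sort-by=.lastTimestamp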
	
	
	==> kube-proxy [2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b] <==
	I0807 18:58:11.317138       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:58:11.332511       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0807 18:58:11.368729       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:58:11.368761       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:58:11.368778       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:58:11.371919       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:58:11.372222       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:58:11.372256       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:58:11.374327       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:58:11.374367       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:58:11.374663       1 config.go:192] "Starting service config controller"
	I0807 18:58:11.374695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:58:11.375134       1 config.go:319] "Starting node config controller"
	I0807 18:58:11.375141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:58:11.475120       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:58:11.475225       1 shared_informer.go:320] Caches are synced for node config
	I0807 18:58:11.475237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8] <==
	I0807 19:04:53.752917       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:04:57.153455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0807 19:04:57.273137       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:04:57.273198       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:04:57.273217       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:04:57.278930       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:04:57.279290       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:04:57.279321       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:04:57.284085       1 config.go:192] "Starting service config controller"
	I0807 19:04:57.284127       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:04:57.284154       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:04:57.284158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:04:57.286660       1 config.go:319] "Starting node config controller"
	I0807 19:04:57.286688       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:04:57.386153       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:04:57.386567       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:04:57.387025       1 shared_informer.go:320] Caches are synced for node config
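
Both kube-proxy instances above report the iptables proxier in single-stack IPv4 mode. If that needs re-verifying after another restart, tailing the proxy pods' logs is the simplest check; a sketch assuming the standard kubeadm label k8s-app=kube-proxy:

    kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=5 --prefix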
	
	
	==> kube-scheduler [6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79] <==
	E0807 18:57:52.972092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:57:52.972318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 18:57:52.972352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 18:57:53.788120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:57:53.788174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:57:53.872709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 18:57:53.872758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 18:57:53.881725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:57:53.881885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:57:53.902046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:57:53.902140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:57:53.916819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:57:53.916906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:57:54.002529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:57:54.002573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:57:54.010548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:57:54.010595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:57:54.146725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:57:54.147185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:57:54.219300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:57:54.219348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:57:54.393198       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:57:54.393319       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0807 18:57:56.559506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 19:03:14.206742       1 run.go:74] "command failed" err="finished without leader elect"
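
The "forbidden" list/watch warnings above appear while RBAC is still syncing right after scheduler startup and stop once the caches sync at 18:57:56; the final "finished without leader elect" line appears to be the scheduler exiting when the control plane was restarted at 19:03:14. A sketch to confirm the scheduler's RBAC wiring is in place, assuming the standard kubeadm binding name:

    kubectl get clusterrolebinding system:kube-scheduler -o wide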
	
	
	==> kube-scheduler [76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da] <==
	I0807 19:04:54.607677       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:04:57.093518       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:04:57.093565       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:04:57.093575       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:04:57.093581       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:04:57.147765       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:04:57.147798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:04:57.157151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 19:04:57.157202       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:04:57.157805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 19:04:57.157874       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:04:57.258169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.078187    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4217bfac3db5a54109f1d3204e1a41c-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-334028\" (UID: \"c4217bfac3db5a54109f1d3204e1a41c\") " pod="kube-system/kube-controller-manager-multinode-334028"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.078200    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ff138ea9e8890a2fb27c64fcd2f5fc58-etcd-data\") pod \"etcd-multinode-334028\" (UID: \"ff138ea9e8890a2fb27c64fcd2f5fc58\") " pod="kube-system/etcd-multinode-334028"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.701053    3880 apiserver.go:52] "Watching apiserver"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.703844    3880 topology_manager.go:215] "Topology Admit Handler" podUID="18ee2fbc-330a-483e-9cb6-8eccc781a058" podNamespace="kube-system" podName="coredns-7db6d8ff4d-582vz"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.704013    3880 topology_manager.go:215] "Topology Admit Handler" podUID="c1a3815e-97c5-48d7-8e76-4f0052a40096" podNamespace="kube-system" podName="storage-provisioner"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.704082    3880 topology_manager.go:215] "Topology Admit Handler" podUID="0fc3b94f-0c9c-4a86-8229-cc904a5e844a" podNamespace="kube-system" podName="kindnet-rwth9"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.704190    3880 topology_manager.go:215] "Topology Admit Handler" podUID="b7f66bee-c532-4132-87a4-d40f6cc2b888" podNamespace="kube-system" podName="kube-proxy-l8zvz"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.704257    3880 topology_manager.go:215] "Topology Admit Handler" podUID="740fe38b-1d09-4860-98d8-d1b7bbec0b6f" podNamespace="default" podName="busybox-fc5497c4f-v64x9"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.739760    3880 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785255    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7f66bee-c532-4132-87a4-d40f6cc2b888-xtables-lock\") pod \"kube-proxy-l8zvz\" (UID: \"b7f66bee-c532-4132-87a4-d40f6cc2b888\") " pod="kube-system/kube-proxy-l8zvz"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785355    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7f66bee-c532-4132-87a4-d40f6cc2b888-lib-modules\") pod \"kube-proxy-l8zvz\" (UID: \"b7f66bee-c532-4132-87a4-d40f6cc2b888\") " pod="kube-system/kube-proxy-l8zvz"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785454    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c1a3815e-97c5-48d7-8e76-4f0052a40096-tmp\") pod \"storage-provisioner\" (UID: \"c1a3815e-97c5-48d7-8e76-4f0052a40096\") " pod="kube-system/storage-provisioner"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785557    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0fc3b94f-0c9c-4a86-8229-cc904a5e844a-cni-cfg\") pod \"kindnet-rwth9\" (UID: \"0fc3b94f-0c9c-4a86-8229-cc904a5e844a\") " pod="kube-system/kindnet-rwth9"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785622    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fc3b94f-0c9c-4a86-8229-cc904a5e844a-lib-modules\") pod \"kindnet-rwth9\" (UID: \"0fc3b94f-0c9c-4a86-8229-cc904a5e844a\") " pod="kube-system/kindnet-rwth9"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: I0807 19:05:00.785672    3880 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fc3b94f-0c9c-4a86-8229-cc904a5e844a-xtables-lock\") pod \"kindnet-rwth9\" (UID: \"0fc3b94f-0c9c-4a86-8229-cc904a5e844a\") " pod="kube-system/kindnet-rwth9"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: E0807 19:05:00.957809    3880 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-334028\" already exists" pod="kube-system/etcd-multinode-334028"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: E0807 19:05:00.963430    3880 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-334028\" already exists" pod="kube-system/kube-apiserver-multinode-334028"
	Aug 07 19:05:01 multinode-334028 kubelet[3880]: I0807 19:05:01.005994    3880 scope.go:117] "RemoveContainer" containerID="dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0"
	Aug 07 19:05:01 multinode-334028 kubelet[3880]: I0807 19:05:01.007440    3880 scope.go:117] "RemoveContainer" containerID="b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99"
	Aug 07 19:05:09 multinode-334028 kubelet[3880]: I0807 19:05:09.650261    3880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 07 19:05:59 multinode-334028 kubelet[3880]: E0807 19:05:59.892164    3880 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:06:39.330878   63682 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-334028 -n multinode-334028
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-334028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (330.12s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334028 stop: exit status 82 (2m0.477683521s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-334028-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-334028 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334028 status: exit status 3 (18.822466036s)

                                                
                                                
-- stdout --
	multinode-334028
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334028-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:09:02.916498   64354 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host
	E0807 19:09:02.916548   64354 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-334028 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-334028 -n multinode-334028
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-334028 logs -n 25: (1.494737061s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028:/home/docker/cp-test_multinode-334028-m02_multinode-334028.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028 sudo cat                                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m02_multinode-334028.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03:/home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028-m03 sudo cat                                   | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp testdata/cp-test.txt                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028:/home/docker/cp-test_multinode-334028-m03_multinode-334028.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028 sudo cat                                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m03_multinode-334028.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt                       | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m02:/home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n                                                                 | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | multinode-334028-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-334028 ssh -n multinode-334028-m02 sudo cat                                   | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-334028 node stop m03                                                          | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:00 UTC |
	| node    | multinode-334028 node start                                                             | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:00 UTC | 07 Aug 24 19:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-334028                                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:01 UTC |                     |
	| stop    | -p multinode-334028                                                                     | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:01 UTC |                     |
	| start   | -p multinode-334028                                                                     | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:03 UTC | 07 Aug 24 19:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-334028                                                                | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:06 UTC |                     |
	| node    | multinode-334028 node delete                                                            | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:06 UTC | 07 Aug 24 19:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-334028 stop                                                                   | multinode-334028 | jenkins | v1.33.1 | 07 Aug 24 19:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:03:13
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:03:13.104163   62561 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:03:13.104475   62561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:03:13.104493   62561 out.go:304] Setting ErrFile to fd 2...
	I0807 19:03:13.104498   62561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:03:13.105274   62561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:03:13.106229   62561 out.go:298] Setting JSON to false
	I0807 19:03:13.107177   62561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9939,"bootTime":1723047454,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:03:13.107240   62561 start.go:139] virtualization: kvm guest
	I0807 19:03:13.109614   62561 out.go:177] * [multinode-334028] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:03:13.111122   62561 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:03:13.111140   62561 notify.go:220] Checking for updates...
	I0807 19:03:13.113644   62561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:03:13.115026   62561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:03:13.116310   62561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:03:13.117522   62561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:03:13.118718   62561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:03:13.120512   62561 config.go:182] Loaded profile config "multinode-334028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:03:13.120626   62561 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:03:13.121063   62561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:03:13.121138   62561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:03:13.136130   62561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0807 19:03:13.136584   62561 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:03:13.137148   62561 main.go:141] libmachine: Using API Version  1
	I0807 19:03:13.137170   62561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:03:13.137496   62561 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:03:13.137688   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.174896   62561 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 19:03:13.176338   62561 start.go:297] selected driver: kvm2
	I0807 19:03:13.176353   62561 start.go:901] validating driver "kvm2" against &{Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:03:13.176492   62561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:03:13.176798   62561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:03:13.176879   62561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:03:13.192168   62561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:03:13.192974   62561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:03:13.193050   62561 cni.go:84] Creating CNI manager for ""
	I0807 19:03:13.193062   62561 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 19:03:13.193139   62561 start.go:340] cluster config:
	{Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:03:13.193275   62561 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:03:13.196045   62561 out.go:177] * Starting "multinode-334028" primary control-plane node in "multinode-334028" cluster
	I0807 19:03:13.197419   62561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:03:13.197461   62561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 19:03:13.197470   62561 cache.go:56] Caching tarball of preloaded images
	I0807 19:03:13.197558   62561 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:03:13.197571   62561 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 19:03:13.197686   62561 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/config.json ...
	I0807 19:03:13.197934   62561 start.go:360] acquireMachinesLock for multinode-334028: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:03:13.197983   62561 start.go:364] duration metric: took 29.135µs to acquireMachinesLock for "multinode-334028"
	I0807 19:03:13.198006   62561 start.go:96] Skipping create...Using existing machine configuration
	I0807 19:03:13.198016   62561 fix.go:54] fixHost starting: 
	I0807 19:03:13.198309   62561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:03:13.198347   62561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:03:13.213169   62561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0807 19:03:13.213617   62561 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:03:13.214105   62561 main.go:141] libmachine: Using API Version  1
	I0807 19:03:13.214127   62561 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:03:13.214422   62561 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:03:13.214627   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.214777   62561 main.go:141] libmachine: (multinode-334028) Calling .GetState
	I0807 19:03:13.216263   62561 fix.go:112] recreateIfNeeded on multinode-334028: state=Running err=<nil>
	W0807 19:03:13.216283   62561 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 19:03:13.218855   62561 out.go:177] * Updating the running kvm2 "multinode-334028" VM ...
	I0807 19:03:13.220051   62561 machine.go:94] provisionDockerMachine start ...
	I0807 19:03:13.220074   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:03:13.220285   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.222913   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.223258   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.223286   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.223455   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.223652   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.223809   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.223935   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.224078   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.224295   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.224306   62561 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:03:13.338146   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334028
	
	I0807 19:03:13.338175   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.338423   62561 buildroot.go:166] provisioning hostname "multinode-334028"
	I0807 19:03:13.338450   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.338627   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.341313   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.341646   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.341685   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.341778   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.342001   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.342158   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.342303   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.342416   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.342650   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.342665   62561 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-334028 && echo "multinode-334028" | sudo tee /etc/hostname
	I0807 19:03:13.473834   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-334028
	
	I0807 19:03:13.473881   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.476966   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.477394   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.477432   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.477674   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.477864   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.478020   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.478159   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.478333   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:13.478529   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:13.478552   62561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-334028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-334028/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-334028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:03:13.589563   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:03:13.589589   62561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:03:13.589610   62561 buildroot.go:174] setting up certificates
	I0807 19:03:13.589621   62561 provision.go:84] configureAuth start
	I0807 19:03:13.589631   62561 main.go:141] libmachine: (multinode-334028) Calling .GetMachineName
	I0807 19:03:13.589964   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:03:13.593015   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.593367   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.593396   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.593547   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.595856   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.596236   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.596262   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.596363   62561 provision.go:143] copyHostCerts
	I0807 19:03:13.596406   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:03:13.596441   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:03:13.596450   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:03:13.596511   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:03:13.596598   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:03:13.596616   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:03:13.596623   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:03:13.596646   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:03:13.596721   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:03:13.596745   62561 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:03:13.596754   62561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:03:13.596792   62561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:03:13.596903   62561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.multinode-334028 san=[127.0.0.1 192.168.39.165 localhost minikube multinode-334028]
	I0807 19:03:13.908252   62561 provision.go:177] copyRemoteCerts
	I0807 19:03:13.908300   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:03:13.908320   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:13.911086   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.911445   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:13.911472   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:13.911604   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:13.911810   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:13.911989   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:13.912161   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:03:14.000790   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0807 19:03:14.000868   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0807 19:03:14.027352   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0807 19:03:14.027438   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:03:14.053053   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0807 19:03:14.053149   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:03:14.079199   62561 provision.go:87] duration metric: took 489.565657ms to configureAuth
	I0807 19:03:14.079230   62561 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:03:14.079506   62561 config.go:182] Loaded profile config "multinode-334028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:03:14.079574   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:03:14.082171   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:14.082575   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:03:14.082611   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:03:14.082726   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:03:14.082935   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:14.083142   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:03:14.083283   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:03:14.083459   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:03:14.083632   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:03:14.083652   62561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:04:44.938259   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:04:44.938313   62561 machine.go:97] duration metric: took 1m31.718245018s to provisionDockerMachine
	I0807 19:04:44.938336   62561 start.go:293] postStartSetup for "multinode-334028" (driver="kvm2")
	I0807 19:04:44.938367   62561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:04:44.938401   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:44.938805   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:04:44.938841   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:44.941641   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:44.942157   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:44.942183   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:44.942354   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:44.942534   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:44.942681   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:44.942808   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.028125   62561 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:04:45.032378   62561 command_runner.go:130] > NAME=Buildroot
	I0807 19:04:45.032397   62561 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0807 19:04:45.032401   62561 command_runner.go:130] > ID=buildroot
	I0807 19:04:45.032406   62561 command_runner.go:130] > VERSION_ID=2023.02.9
	I0807 19:04:45.032410   62561 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0807 19:04:45.032448   62561 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:04:45.032471   62561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:04:45.032540   62561 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:04:45.032646   62561 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:04:45.032657   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /etc/ssl/certs/280522.pem
	I0807 19:04:45.032776   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:04:45.042268   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:04:45.066739   62561 start.go:296] duration metric: took 128.385682ms for postStartSetup
	I0807 19:04:45.066789   62561 fix.go:56] duration metric: took 1m31.868773792s for fixHost
	I0807 19:04:45.066812   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.069537   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.069885   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.069914   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.070103   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.070313   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.070484   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.070678   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.070843   62561 main.go:141] libmachine: Using SSH client type: native
	I0807 19:04:45.071054   62561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0807 19:04:45.071071   62561 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:04:45.177010   62561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723057485.147190184
	
	I0807 19:04:45.177029   62561 fix.go:216] guest clock: 1723057485.147190184
	I0807 19:04:45.177041   62561 fix.go:229] Guest: 2024-08-07 19:04:45.147190184 +0000 UTC Remote: 2024-08-07 19:04:45.066795772 +0000 UTC m=+91.996777253 (delta=80.394412ms)
	I0807 19:04:45.177071   62561 fix.go:200] guest clock delta is within tolerance: 80.394412ms
	I0807 19:04:45.177081   62561 start.go:83] releasing machines lock for "multinode-334028", held for 1m31.979083311s
	I0807 19:04:45.177109   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.177416   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:04:45.179985   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.180377   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.180406   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.180615   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181084   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181206   62561 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:04:45.181302   62561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:04:45.181343   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.181443   62561 ssh_runner.go:195] Run: cat /version.json
	I0807 19:04:45.181467   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:04:45.183901   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184174   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184261   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.184294   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184408   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.184546   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:45.184577   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.184585   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:45.184745   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.184755   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:04:45.184884   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:04:45.184924   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.185004   62561 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:04:45.185124   62561 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:04:45.261525   62561 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0807 19:04:45.261741   62561 ssh_runner.go:195] Run: systemctl --version
	I0807 19:04:45.285606   62561 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0807 19:04:45.285664   62561 command_runner.go:130] > systemd 252 (252)
	I0807 19:04:45.285691   62561 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0807 19:04:45.285742   62561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:04:45.450249   62561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0807 19:04:45.456241   62561 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0807 19:04:45.456285   62561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:04:45.456324   62561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:04:45.466082   62561 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 19:04:45.466103   62561 start.go:495] detecting cgroup driver to use...
	I0807 19:04:45.466172   62561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:04:45.483826   62561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:04:45.498722   62561 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:04:45.498788   62561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:04:45.513573   62561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:04:45.527955   62561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:04:45.674021   62561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:04:45.821302   62561 docker.go:233] disabling docker service ...
	I0807 19:04:45.821375   62561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:04:45.840625   62561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:04:45.855014   62561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:04:45.996881   62561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:04:46.144443   62561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:04:46.159398   62561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:04:46.178240   62561 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0807 19:04:46.178278   62561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 19:04:46.178320   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.189411   62561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:04:46.189477   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.200374   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.211826   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.222485   62561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:04:46.233933   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.244713   62561 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.255921   62561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:04:46.267281   62561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:04:46.277324   62561 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0807 19:04:46.277440   62561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:04:46.287285   62561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:04:46.425101   62561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 19:04:46.940793   62561 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:04:46.940874   62561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:04:46.945728   62561 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0807 19:04:46.945755   62561 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0807 19:04:46.945764   62561 command_runner.go:130] > Device: 0,22	Inode: 1367        Links: 1
	I0807 19:04:46.945775   62561 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 19:04:46.945782   62561 command_runner.go:130] > Access: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945796   62561 command_runner.go:130] > Modify: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945804   62561 command_runner.go:130] > Change: 2024-08-07 19:04:46.804699657 +0000
	I0807 19:04:46.945809   62561 command_runner.go:130] >  Birth: -
	I0807 19:04:46.945848   62561 start.go:563] Will wait 60s for crictl version
	I0807 19:04:46.945893   62561 ssh_runner.go:195] Run: which crictl
	I0807 19:04:46.949650   62561 command_runner.go:130] > /usr/bin/crictl
	I0807 19:04:46.949710   62561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:04:46.988867   62561 command_runner.go:130] > Version:  0.1.0
	I0807 19:04:46.988983   62561 command_runner.go:130] > RuntimeName:  cri-o
	I0807 19:04:46.989060   62561 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0807 19:04:46.989115   62561 command_runner.go:130] > RuntimeApiVersion:  v1
	I0807 19:04:46.990356   62561 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:04:46.990431   62561 ssh_runner.go:195] Run: crio --version
	I0807 19:04:47.020531   62561 command_runner.go:130] > crio version 1.29.1
	I0807 19:04:47.020552   62561 command_runner.go:130] > Version:        1.29.1
	I0807 19:04:47.020558   62561 command_runner.go:130] > GitCommit:      unknown
	I0807 19:04:47.020562   62561 command_runner.go:130] > GitCommitDate:  unknown
	I0807 19:04:47.020572   62561 command_runner.go:130] > GitTreeState:   clean
	I0807 19:04:47.020577   62561 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0807 19:04:47.020581   62561 command_runner.go:130] > GoVersion:      go1.21.6
	I0807 19:04:47.020585   62561 command_runner.go:130] > Compiler:       gc
	I0807 19:04:47.020590   62561 command_runner.go:130] > Platform:       linux/amd64
	I0807 19:04:47.020595   62561 command_runner.go:130] > Linkmode:       dynamic
	I0807 19:04:47.020602   62561 command_runner.go:130] > BuildTags:      
	I0807 19:04:47.020608   62561 command_runner.go:130] >   containers_image_ostree_stub
	I0807 19:04:47.020614   62561 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0807 19:04:47.020620   62561 command_runner.go:130] >   btrfs_noversion
	I0807 19:04:47.020628   62561 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0807 19:04:47.020638   62561 command_runner.go:130] >   libdm_no_deferred_remove
	I0807 19:04:47.020644   62561 command_runner.go:130] >   seccomp
	I0807 19:04:47.020650   62561 command_runner.go:130] > LDFlags:          unknown
	I0807 19:04:47.020657   62561 command_runner.go:130] > SeccompEnabled:   true
	I0807 19:04:47.020664   62561 command_runner.go:130] > AppArmorEnabled:  false
	I0807 19:04:47.020760   62561 ssh_runner.go:195] Run: crio --version
	I0807 19:04:47.049327   62561 command_runner.go:130] > crio version 1.29.1
	I0807 19:04:47.049354   62561 command_runner.go:130] > Version:        1.29.1
	I0807 19:04:47.049362   62561 command_runner.go:130] > GitCommit:      unknown
	I0807 19:04:47.049369   62561 command_runner.go:130] > GitCommitDate:  unknown
	I0807 19:04:47.049376   62561 command_runner.go:130] > GitTreeState:   clean
	I0807 19:04:47.049384   62561 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0807 19:04:47.049391   62561 command_runner.go:130] > GoVersion:      go1.21.6
	I0807 19:04:47.049397   62561 command_runner.go:130] > Compiler:       gc
	I0807 19:04:47.049404   62561 command_runner.go:130] > Platform:       linux/amd64
	I0807 19:04:47.049412   62561 command_runner.go:130] > Linkmode:       dynamic
	I0807 19:04:47.049421   62561 command_runner.go:130] > BuildTags:      
	I0807 19:04:47.049430   62561 command_runner.go:130] >   containers_image_ostree_stub
	I0807 19:04:47.049434   62561 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0807 19:04:47.049439   62561 command_runner.go:130] >   btrfs_noversion
	I0807 19:04:47.049445   62561 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0807 19:04:47.049455   62561 command_runner.go:130] >   libdm_no_deferred_remove
	I0807 19:04:47.049460   62561 command_runner.go:130] >   seccomp
	I0807 19:04:47.049467   62561 command_runner.go:130] > LDFlags:          unknown
	I0807 19:04:47.049476   62561 command_runner.go:130] > SeccompEnabled:   true
	I0807 19:04:47.049485   62561 command_runner.go:130] > AppArmorEnabled:  false
	I0807 19:04:47.051588   62561 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 19:04:47.052984   62561 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:04:47.055837   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:47.056213   62561 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:04:47.056243   62561 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:04:47.056481   62561 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0807 19:04:47.060533   62561 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0807 19:04:47.060724   62561 kubeadm.go:883] updating cluster {Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:04:47.060866   62561 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:04:47.060946   62561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:04:47.116977   62561 command_runner.go:130] > {
	I0807 19:04:47.117002   62561 command_runner.go:130] >   "images": [
	I0807 19:04:47.117006   62561 command_runner.go:130] >     {
	I0807 19:04:47.117014   62561 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0807 19:04:47.117019   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117025   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0807 19:04:47.117029   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117036   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117058   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0807 19:04:47.117073   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0807 19:04:47.117080   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117092   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.117102   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117110   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117117   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117122   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117126   62561 command_runner.go:130] >     },
	I0807 19:04:47.117130   62561 command_runner.go:130] >     {
	I0807 19:04:47.117136   62561 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0807 19:04:47.117141   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117146   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0807 19:04:47.117151   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117155   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117166   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0807 19:04:47.117176   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0807 19:04:47.117181   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117186   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.117190   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117203   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117217   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117225   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117229   62561 command_runner.go:130] >     },
	I0807 19:04:47.117233   62561 command_runner.go:130] >     {
	I0807 19:04:47.117239   62561 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0807 19:04:47.117246   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117252   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0807 19:04:47.117258   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117262   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117272   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0807 19:04:47.117279   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0807 19:04:47.117285   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117289   62561 command_runner.go:130] >       "size": "1363676",
	I0807 19:04:47.117295   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117300   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117306   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117310   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117317   62561 command_runner.go:130] >     },
	I0807 19:04:47.117320   62561 command_runner.go:130] >     {
	I0807 19:04:47.117329   62561 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0807 19:04:47.117336   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117341   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0807 19:04:47.117347   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117351   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117361   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0807 19:04:47.117379   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0807 19:04:47.117387   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117391   62561 command_runner.go:130] >       "size": "31470524",
	I0807 19:04:47.117398   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117402   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117408   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117412   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117418   62561 command_runner.go:130] >     },
	I0807 19:04:47.117422   62561 command_runner.go:130] >     {
	I0807 19:04:47.117431   62561 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0807 19:04:47.117437   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117448   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0807 19:04:47.117455   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117460   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117470   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0807 19:04:47.117479   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0807 19:04:47.117485   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117490   62561 command_runner.go:130] >       "size": "61245718",
	I0807 19:04:47.117497   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117501   62561 command_runner.go:130] >       "username": "nonroot",
	I0807 19:04:47.117508   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117512   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117518   62561 command_runner.go:130] >     },
	I0807 19:04:47.117522   62561 command_runner.go:130] >     {
	I0807 19:04:47.117529   62561 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0807 19:04:47.117535   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117540   62561 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0807 19:04:47.117546   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117551   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117558   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0807 19:04:47.117566   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0807 19:04:47.117573   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117577   62561 command_runner.go:130] >       "size": "150779692",
	I0807 19:04:47.117584   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117588   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117594   62561 command_runner.go:130] >       },
	I0807 19:04:47.117598   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117604   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117609   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117612   62561 command_runner.go:130] >     },
	I0807 19:04:47.117616   62561 command_runner.go:130] >     {
	I0807 19:04:47.117622   62561 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0807 19:04:47.117628   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117634   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0807 19:04:47.117638   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117643   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117652   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0807 19:04:47.117665   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0807 19:04:47.117672   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117676   62561 command_runner.go:130] >       "size": "117609954",
	I0807 19:04:47.117680   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117687   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117691   62561 command_runner.go:130] >       },
	I0807 19:04:47.117697   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117702   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117708   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117711   62561 command_runner.go:130] >     },
	I0807 19:04:47.117715   62561 command_runner.go:130] >     {
	I0807 19:04:47.117721   62561 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0807 19:04:47.117726   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117731   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0807 19:04:47.117737   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117741   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117761   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0807 19:04:47.117774   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0807 19:04:47.117781   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117786   62561 command_runner.go:130] >       "size": "112198984",
	I0807 19:04:47.117792   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117796   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117825   62561 command_runner.go:130] >       },
	I0807 19:04:47.117832   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117837   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117841   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117845   62561 command_runner.go:130] >     },
	I0807 19:04:47.117848   62561 command_runner.go:130] >     {
	I0807 19:04:47.117854   62561 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0807 19:04:47.117858   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117863   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0807 19:04:47.117866   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117870   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117877   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0807 19:04:47.117884   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0807 19:04:47.117887   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117897   62561 command_runner.go:130] >       "size": "85953945",
	I0807 19:04:47.117902   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.117905   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117909   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.117912   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.117915   62561 command_runner.go:130] >     },
	I0807 19:04:47.117918   62561 command_runner.go:130] >     {
	I0807 19:04:47.117924   62561 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0807 19:04:47.117930   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.117935   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0807 19:04:47.117938   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117942   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.117949   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0807 19:04:47.117959   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0807 19:04:47.117963   62561 command_runner.go:130] >       ],
	I0807 19:04:47.117969   62561 command_runner.go:130] >       "size": "63051080",
	I0807 19:04:47.117973   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.117979   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.117983   62561 command_runner.go:130] >       },
	I0807 19:04:47.117990   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.117999   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.118006   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.118009   62561 command_runner.go:130] >     },
	I0807 19:04:47.118013   62561 command_runner.go:130] >     {
	I0807 19:04:47.118019   62561 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0807 19:04:47.118038   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.118051   62561 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0807 19:04:47.118060   62561 command_runner.go:130] >       ],
	I0807 19:04:47.118065   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.118074   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0807 19:04:47.118081   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0807 19:04:47.118088   62561 command_runner.go:130] >       ],
	I0807 19:04:47.118093   62561 command_runner.go:130] >       "size": "750414",
	I0807 19:04:47.118096   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.118102   62561 command_runner.go:130] >         "value": "65535"
	I0807 19:04:47.118106   62561 command_runner.go:130] >       },
	I0807 19:04:47.118119   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.118124   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.118128   62561 command_runner.go:130] >       "pinned": true
	I0807 19:04:47.118131   62561 command_runner.go:130] >     }
	I0807 19:04:47.118134   62561 command_runner.go:130] >   ]
	I0807 19:04:47.118138   62561 command_runner.go:130] > }
	I0807 19:04:47.118327   62561 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:04:47.118339   62561 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:04:47.118388   62561 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:04:47.153558   62561 command_runner.go:130] > {
	I0807 19:04:47.153582   62561 command_runner.go:130] >   "images": [
	I0807 19:04:47.153587   62561 command_runner.go:130] >     {
	I0807 19:04:47.153595   62561 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0807 19:04:47.153606   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153613   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0807 19:04:47.153618   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153625   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153658   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0807 19:04:47.153676   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0807 19:04:47.153682   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153686   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.153690   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153693   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153700   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153705   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153708   62561 command_runner.go:130] >     },
	I0807 19:04:47.153711   62561 command_runner.go:130] >     {
	I0807 19:04:47.153717   62561 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0807 19:04:47.153721   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153730   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0807 19:04:47.153736   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153742   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153757   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0807 19:04:47.153772   62561 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0807 19:04:47.153779   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153787   62561 command_runner.go:130] >       "size": "87165492",
	I0807 19:04:47.153794   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153800   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153806   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153810   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153814   62561 command_runner.go:130] >     },
	I0807 19:04:47.153820   62561 command_runner.go:130] >     {
	I0807 19:04:47.153834   62561 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0807 19:04:47.153840   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153852   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0807 19:04:47.153861   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153870   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.153884   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0807 19:04:47.153899   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0807 19:04:47.153907   62561 command_runner.go:130] >       ],
	I0807 19:04:47.153916   62561 command_runner.go:130] >       "size": "1363676",
	I0807 19:04:47.153926   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.153935   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.153943   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.153952   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.153961   62561 command_runner.go:130] >     },
	I0807 19:04:47.153969   62561 command_runner.go:130] >     {
	I0807 19:04:47.153982   62561 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0807 19:04:47.153989   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.153994   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0807 19:04:47.154003   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154013   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154025   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0807 19:04:47.154044   62561 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0807 19:04:47.154053   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154062   62561 command_runner.go:130] >       "size": "31470524",
	I0807 19:04:47.154069   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154078   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154083   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154088   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154095   62561 command_runner.go:130] >     },
	I0807 19:04:47.154104   62561 command_runner.go:130] >     {
	I0807 19:04:47.154114   62561 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0807 19:04:47.154124   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154135   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0807 19:04:47.154143   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154153   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154173   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0807 19:04:47.154185   62561 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0807 19:04:47.154193   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154201   62561 command_runner.go:130] >       "size": "61245718",
	I0807 19:04:47.154211   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154221   62561 command_runner.go:130] >       "username": "nonroot",
	I0807 19:04:47.154230   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154239   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154249   62561 command_runner.go:130] >     },
	I0807 19:04:47.154257   62561 command_runner.go:130] >     {
	I0807 19:04:47.154267   62561 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0807 19:04:47.154273   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154278   62561 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0807 19:04:47.154287   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154296   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154307   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0807 19:04:47.154322   62561 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0807 19:04:47.154330   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154339   62561 command_runner.go:130] >       "size": "150779692",
	I0807 19:04:47.154348   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154357   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154364   62561 command_runner.go:130] >       },
	I0807 19:04:47.154368   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154376   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154382   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154390   62561 command_runner.go:130] >     },
	I0807 19:04:47.154396   62561 command_runner.go:130] >     {
	I0807 19:04:47.154410   62561 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0807 19:04:47.154417   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154425   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0807 19:04:47.154430   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154436   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154447   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0807 19:04:47.154457   62561 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0807 19:04:47.154462   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154468   62561 command_runner.go:130] >       "size": "117609954",
	I0807 19:04:47.154473   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154480   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154484   62561 command_runner.go:130] >       },
	I0807 19:04:47.154490   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154495   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154501   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154507   62561 command_runner.go:130] >     },
	I0807 19:04:47.154511   62561 command_runner.go:130] >     {
	I0807 19:04:47.154522   62561 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0807 19:04:47.154529   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154538   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0807 19:04:47.154547   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154554   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154576   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0807 19:04:47.154590   62561 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0807 19:04:47.154596   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154600   62561 command_runner.go:130] >       "size": "112198984",
	I0807 19:04:47.154606   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154610   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154613   62561 command_runner.go:130] >       },
	I0807 19:04:47.154617   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154622   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154625   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154631   62561 command_runner.go:130] >     },
	I0807 19:04:47.154634   62561 command_runner.go:130] >     {
	I0807 19:04:47.154640   62561 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0807 19:04:47.154646   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154651   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0807 19:04:47.154657   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154660   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154669   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0807 19:04:47.154678   62561 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0807 19:04:47.154683   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154687   62561 command_runner.go:130] >       "size": "85953945",
	I0807 19:04:47.154693   62561 command_runner.go:130] >       "uid": null,
	I0807 19:04:47.154697   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154701   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154707   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154710   62561 command_runner.go:130] >     },
	I0807 19:04:47.154722   62561 command_runner.go:130] >     {
	I0807 19:04:47.154730   62561 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0807 19:04:47.154736   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154743   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0807 19:04:47.154749   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154754   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154763   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0807 19:04:47.154772   62561 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0807 19:04:47.154778   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154782   62561 command_runner.go:130] >       "size": "63051080",
	I0807 19:04:47.154785   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154791   62561 command_runner.go:130] >         "value": "0"
	I0807 19:04:47.154795   62561 command_runner.go:130] >       },
	I0807 19:04:47.154800   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154804   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154810   62561 command_runner.go:130] >       "pinned": false
	I0807 19:04:47.154814   62561 command_runner.go:130] >     },
	I0807 19:04:47.154819   62561 command_runner.go:130] >     {
	I0807 19:04:47.154825   62561 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0807 19:04:47.154831   62561 command_runner.go:130] >       "repoTags": [
	I0807 19:04:47.154835   62561 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0807 19:04:47.154838   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154842   62561 command_runner.go:130] >       "repoDigests": [
	I0807 19:04:47.154848   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0807 19:04:47.154857   62561 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0807 19:04:47.154863   62561 command_runner.go:130] >       ],
	I0807 19:04:47.154867   62561 command_runner.go:130] >       "size": "750414",
	I0807 19:04:47.154872   62561 command_runner.go:130] >       "uid": {
	I0807 19:04:47.154878   62561 command_runner.go:130] >         "value": "65535"
	I0807 19:04:47.154881   62561 command_runner.go:130] >       },
	I0807 19:04:47.154888   62561 command_runner.go:130] >       "username": "",
	I0807 19:04:47.154891   62561 command_runner.go:130] >       "spec": null,
	I0807 19:04:47.154898   62561 command_runner.go:130] >       "pinned": true
	I0807 19:04:47.154901   62561 command_runner.go:130] >     }
	I0807 19:04:47.154909   62561 command_runner.go:130] >   ]
	I0807 19:04:47.154912   62561 command_runner.go:130] > }
	I0807 19:04:47.155021   62561 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:04:47.155032   62561 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:04:47.155038   62561 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.30.3 crio true true} ...
	I0807 19:04:47.155136   62561 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-334028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:04:47.155199   62561 ssh_runner.go:195] Run: crio config
	I0807 19:04:47.197573   62561 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0807 19:04:47.197604   62561 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0807 19:04:47.197614   62561 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0807 19:04:47.197620   62561 command_runner.go:130] > #
	I0807 19:04:47.197631   62561 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0807 19:04:47.197642   62561 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0807 19:04:47.197652   62561 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0807 19:04:47.197662   62561 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0807 19:04:47.197667   62561 command_runner.go:130] > # reload'.
	I0807 19:04:47.197675   62561 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0807 19:04:47.197688   62561 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0807 19:04:47.197700   62561 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0807 19:04:47.197712   62561 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0807 19:04:47.197720   62561 command_runner.go:130] > [crio]
	I0807 19:04:47.197729   62561 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0807 19:04:47.197739   62561 command_runner.go:130] > # containers images, in this directory.
	I0807 19:04:47.197814   62561 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0807 19:04:47.197837   62561 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0807 19:04:47.197847   62561 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0807 19:04:47.197860   62561 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0807 19:04:47.197870   62561 command_runner.go:130] > # imagestore = ""
	I0807 19:04:47.197883   62561 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0807 19:04:47.197896   62561 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0807 19:04:47.197906   62561 command_runner.go:130] > storage_driver = "overlay"
	I0807 19:04:47.197916   62561 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0807 19:04:47.197970   62561 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0807 19:04:47.197983   62561 command_runner.go:130] > storage_option = [
	I0807 19:04:47.198017   62561 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0807 19:04:47.198028   62561 command_runner.go:130] > ]
	I0807 19:04:47.198038   62561 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0807 19:04:47.198051   62561 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0807 19:04:47.198075   62561 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0807 19:04:47.198087   62561 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0807 19:04:47.198097   62561 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0807 19:04:47.198108   62561 command_runner.go:130] > # always happen on a node reboot
	I0807 19:04:47.198115   62561 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0807 19:04:47.198135   62561 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0807 19:04:47.198148   62561 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0807 19:04:47.198167   62561 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0807 19:04:47.198179   62561 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0807 19:04:47.198194   62561 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0807 19:04:47.198210   62561 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0807 19:04:47.198219   62561 command_runner.go:130] > # internal_wipe = true
	I0807 19:04:47.198235   62561 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0807 19:04:47.198247   62561 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0807 19:04:47.198257   62561 command_runner.go:130] > # internal_repair = false
	I0807 19:04:47.198268   62561 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0807 19:04:47.198280   62561 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0807 19:04:47.198293   62561 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0807 19:04:47.198305   62561 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0807 19:04:47.198319   62561 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0807 19:04:47.198328   62561 command_runner.go:130] > [crio.api]
	I0807 19:04:47.198339   62561 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0807 19:04:47.198348   62561 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0807 19:04:47.198356   62561 command_runner.go:130] > # IP address on which the stream server will listen.
	I0807 19:04:47.198368   62561 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0807 19:04:47.198381   62561 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0807 19:04:47.198388   62561 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0807 19:04:47.198397   62561 command_runner.go:130] > # stream_port = "0"
	I0807 19:04:47.198408   62561 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0807 19:04:47.198418   62561 command_runner.go:130] > # stream_enable_tls = false
	I0807 19:04:47.198427   62561 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0807 19:04:47.198439   62561 command_runner.go:130] > # stream_idle_timeout = ""
	I0807 19:04:47.198452   62561 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0807 19:04:47.198461   62561 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0807 19:04:47.198470   62561 command_runner.go:130] > # minutes.
	I0807 19:04:47.198476   62561 command_runner.go:130] > # stream_tls_cert = ""
	I0807 19:04:47.198496   62561 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0807 19:04:47.198508   62561 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0807 19:04:47.198518   62561 command_runner.go:130] > # stream_tls_key = ""
	I0807 19:04:47.198527   62561 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0807 19:04:47.198541   62561 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0807 19:04:47.198566   62561 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0807 19:04:47.198575   62561 command_runner.go:130] > # stream_tls_ca = ""
	I0807 19:04:47.198587   62561 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0807 19:04:47.198597   62561 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0807 19:04:47.198608   62561 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0807 19:04:47.198618   62561 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0807 19:04:47.198627   62561 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0807 19:04:47.198639   62561 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0807 19:04:47.198648   62561 command_runner.go:130] > [crio.runtime]
	I0807 19:04:47.198657   62561 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0807 19:04:47.198667   62561 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0807 19:04:47.198674   62561 command_runner.go:130] > # "nofile=1024:2048"
	I0807 19:04:47.198684   62561 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0807 19:04:47.198693   62561 command_runner.go:130] > # default_ulimits = [
	I0807 19:04:47.198698   62561 command_runner.go:130] > # ]
	I0807 19:04:47.198710   62561 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0807 19:04:47.198719   62561 command_runner.go:130] > # no_pivot = false
	I0807 19:04:47.198728   62561 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0807 19:04:47.198740   62561 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0807 19:04:47.198751   62561 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0807 19:04:47.198762   62561 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0807 19:04:47.198773   62561 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0807 19:04:47.198785   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0807 19:04:47.198796   62561 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0807 19:04:47.198806   62561 command_runner.go:130] > # Cgroup setting for conmon
	I0807 19:04:47.198817   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0807 19:04:47.198827   62561 command_runner.go:130] > conmon_cgroup = "pod"
	I0807 19:04:47.198836   62561 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0807 19:04:47.198847   62561 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0807 19:04:47.198857   62561 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0807 19:04:47.198867   62561 command_runner.go:130] > conmon_env = [
	I0807 19:04:47.198884   62561 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0807 19:04:47.198894   62561 command_runner.go:130] > ]
	I0807 19:04:47.198904   62561 command_runner.go:130] > # Additional environment variables to set for all the
	I0807 19:04:47.198916   62561 command_runner.go:130] > # containers. These are overridden if set in the
	I0807 19:04:47.198929   62561 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0807 19:04:47.198938   62561 command_runner.go:130] > # default_env = [
	I0807 19:04:47.198943   62561 command_runner.go:130] > # ]
	I0807 19:04:47.198955   62561 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0807 19:04:47.198969   62561 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0807 19:04:47.198980   62561 command_runner.go:130] > # selinux = false
	I0807 19:04:47.198994   62561 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0807 19:04:47.199007   62561 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0807 19:04:47.199018   62561 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0807 19:04:47.199028   62561 command_runner.go:130] > # seccomp_profile = ""
	I0807 19:04:47.199038   62561 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0807 19:04:47.199050   62561 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0807 19:04:47.199062   62561 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0807 19:04:47.199072   62561 command_runner.go:130] > # which might increase security.
	I0807 19:04:47.199081   62561 command_runner.go:130] > # This option is currently deprecated,
	I0807 19:04:47.199091   62561 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0807 19:04:47.199099   62561 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0807 19:04:47.199109   62561 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0807 19:04:47.199123   62561 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0807 19:04:47.199137   62561 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0807 19:04:47.199151   62561 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0807 19:04:47.199164   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.199175   62561 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0807 19:04:47.199185   62561 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0807 19:04:47.199196   62561 command_runner.go:130] > # the cgroup blockio controller.
	I0807 19:04:47.199202   62561 command_runner.go:130] > # blockio_config_file = ""
	I0807 19:04:47.199215   62561 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0807 19:04:47.199224   62561 command_runner.go:130] > # blockio parameters.
	I0807 19:04:47.199231   62561 command_runner.go:130] > # blockio_reload = false
	I0807 19:04:47.199244   62561 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0807 19:04:47.199253   62561 command_runner.go:130] > # irqbalance daemon.
	I0807 19:04:47.199262   62561 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0807 19:04:47.199282   62561 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0807 19:04:47.199297   62561 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0807 19:04:47.199311   62561 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0807 19:04:47.199325   62561 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0807 19:04:47.199337   62561 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0807 19:04:47.199344   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.199350   62561 command_runner.go:130] > # rdt_config_file = ""
	I0807 19:04:47.199359   62561 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0807 19:04:47.199369   62561 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0807 19:04:47.199411   62561 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0807 19:04:47.199423   62561 command_runner.go:130] > # separate_pull_cgroup = ""
	I0807 19:04:47.199433   62561 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0807 19:04:47.199446   62561 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0807 19:04:47.199452   62561 command_runner.go:130] > # will be added.
	I0807 19:04:47.199460   62561 command_runner.go:130] > # default_capabilities = [
	I0807 19:04:47.199466   62561 command_runner.go:130] > # 	"CHOWN",
	I0807 19:04:47.199475   62561 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0807 19:04:47.199484   62561 command_runner.go:130] > # 	"FSETID",
	I0807 19:04:47.199492   62561 command_runner.go:130] > # 	"FOWNER",
	I0807 19:04:47.199501   62561 command_runner.go:130] > # 	"SETGID",
	I0807 19:04:47.199510   62561 command_runner.go:130] > # 	"SETUID",
	I0807 19:04:47.199520   62561 command_runner.go:130] > # 	"SETPCAP",
	I0807 19:04:47.199530   62561 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0807 19:04:47.199539   62561 command_runner.go:130] > # 	"KILL",
	I0807 19:04:47.199549   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199564   62561 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0807 19:04:47.199577   62561 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0807 19:04:47.199587   62561 command_runner.go:130] > # add_inheritable_capabilities = false
	I0807 19:04:47.199598   62561 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0807 19:04:47.199610   62561 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0807 19:04:47.199618   62561 command_runner.go:130] > default_sysctls = [
	I0807 19:04:47.199626   62561 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0807 19:04:47.199634   62561 command_runner.go:130] > ]
	I0807 19:04:47.199642   62561 command_runner.go:130] > # List of devices on the host that a
	I0807 19:04:47.199652   62561 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0807 19:04:47.199659   62561 command_runner.go:130] > # allowed_devices = [
	I0807 19:04:47.199672   62561 command_runner.go:130] > # 	"/dev/fuse",
	I0807 19:04:47.199678   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199685   62561 command_runner.go:130] > # List of additional devices. specified as
	I0807 19:04:47.199696   62561 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0807 19:04:47.199705   62561 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0807 19:04:47.199713   62561 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0807 19:04:47.199723   62561 command_runner.go:130] > # additional_devices = [
	I0807 19:04:47.199728   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199737   62561 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0807 19:04:47.199743   62561 command_runner.go:130] > # cdi_spec_dirs = [
	I0807 19:04:47.199750   62561 command_runner.go:130] > # 	"/etc/cdi",
	I0807 19:04:47.199756   62561 command_runner.go:130] > # 	"/var/run/cdi",
	I0807 19:04:47.199763   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199772   62561 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0807 19:04:47.199784   62561 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0807 19:04:47.199790   62561 command_runner.go:130] > # Defaults to false.
	I0807 19:04:47.199803   62561 command_runner.go:130] > # device_ownership_from_security_context = false
	I0807 19:04:47.199816   62561 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0807 19:04:47.199829   62561 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0807 19:04:47.199838   62561 command_runner.go:130] > # hooks_dir = [
	I0807 19:04:47.199846   62561 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0807 19:04:47.199854   62561 command_runner.go:130] > # ]
	I0807 19:04:47.199863   62561 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0807 19:04:47.199876   62561 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0807 19:04:47.199888   62561 command_runner.go:130] > # its default mounts from the following two files:
	I0807 19:04:47.199894   62561 command_runner.go:130] > #
	I0807 19:04:47.199904   62561 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0807 19:04:47.199918   62561 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0807 19:04:47.199927   62561 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0807 19:04:47.199934   62561 command_runner.go:130] > #
	I0807 19:04:47.199943   62561 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0807 19:04:47.199956   62561 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0807 19:04:47.199968   62561 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0807 19:04:47.199978   62561 command_runner.go:130] > #      only add mounts it finds in this file.
	I0807 19:04:47.199983   62561 command_runner.go:130] > #
	I0807 19:04:47.199990   62561 command_runner.go:130] > # default_mounts_file = ""
	I0807 19:04:47.200006   62561 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0807 19:04:47.200022   62561 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0807 19:04:47.200028   62561 command_runner.go:130] > pids_limit = 1024
	I0807 19:04:47.200038   62561 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0807 19:04:47.200049   62561 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0807 19:04:47.200063   62561 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0807 19:04:47.200079   62561 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0807 19:04:47.200089   62561 command_runner.go:130] > # log_size_max = -1
	I0807 19:04:47.200104   62561 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0807 19:04:47.200115   62561 command_runner.go:130] > # log_to_journald = false
	I0807 19:04:47.200128   62561 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0807 19:04:47.200139   62561 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0807 19:04:47.200151   62561 command_runner.go:130] > # Path to directory for container attach sockets.
	I0807 19:04:47.200167   62561 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0807 19:04:47.200179   62561 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0807 19:04:47.200189   62561 command_runner.go:130] > # bind_mount_prefix = ""
	I0807 19:04:47.200217   62561 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0807 19:04:47.200230   62561 command_runner.go:130] > # read_only = false
	I0807 19:04:47.200242   62561 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0807 19:04:47.200261   62561 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0807 19:04:47.200270   62561 command_runner.go:130] > # live configuration reload.
	I0807 19:04:47.200277   62561 command_runner.go:130] > # log_level = "info"
	I0807 19:04:47.200289   62561 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0807 19:04:47.200300   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.200309   62561 command_runner.go:130] > # log_filter = ""
	I0807 19:04:47.200319   62561 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0807 19:04:47.200332   62561 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0807 19:04:47.200340   62561 command_runner.go:130] > # separated by comma.
	I0807 19:04:47.200352   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200363   62561 command_runner.go:130] > # uid_mappings = ""
	I0807 19:04:47.200372   62561 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0807 19:04:47.200381   62561 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0807 19:04:47.200388   62561 command_runner.go:130] > # separated by comma.
	I0807 19:04:47.200399   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200409   62561 command_runner.go:130] > # gid_mappings = ""
	I0807 19:04:47.200419   62561 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0807 19:04:47.200440   62561 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0807 19:04:47.200454   62561 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0807 19:04:47.200468   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200477   62561 command_runner.go:130] > # minimum_mappable_uid = -1
	I0807 19:04:47.200500   62561 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0807 19:04:47.200516   62561 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0807 19:04:47.200529   62561 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0807 19:04:47.200544   62561 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0807 19:04:47.200554   62561 command_runner.go:130] > # minimum_mappable_gid = -1
	I0807 19:04:47.200565   62561 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0807 19:04:47.200578   62561 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0807 19:04:47.200592   62561 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0807 19:04:47.200601   62561 command_runner.go:130] > # ctr_stop_timeout = 30
	I0807 19:04:47.200610   62561 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0807 19:04:47.200622   62561 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0807 19:04:47.200629   62561 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0807 19:04:47.200639   62561 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0807 19:04:47.200645   62561 command_runner.go:130] > drop_infra_ctr = false
	I0807 19:04:47.200657   62561 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0807 19:04:47.200668   62561 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0807 19:04:47.200681   62561 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0807 19:04:47.200690   62561 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0807 19:04:47.200701   62561 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0807 19:04:47.200716   62561 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0807 19:04:47.200729   62561 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0807 19:04:47.200740   62561 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0807 19:04:47.200751   62561 command_runner.go:130] > # shared_cpuset = ""
	I0807 19:04:47.200764   62561 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0807 19:04:47.200776   62561 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0807 19:04:47.200787   62561 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0807 19:04:47.200797   62561 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0807 19:04:47.200807   62561 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0807 19:04:47.200816   62561 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0807 19:04:47.200830   62561 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0807 19:04:47.200838   62561 command_runner.go:130] > # enable_criu_support = false
	I0807 19:04:47.200846   62561 command_runner.go:130] > # Enable/disable the generation of the container,
	I0807 19:04:47.200862   62561 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0807 19:04:47.200872   62561 command_runner.go:130] > # enable_pod_events = false
	I0807 19:04:47.200882   62561 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0807 19:04:47.200898   62561 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0807 19:04:47.200909   62561 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0807 19:04:47.200915   62561 command_runner.go:130] > # default_runtime = "runc"
	I0807 19:04:47.200927   62561 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0807 19:04:47.200942   62561 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0807 19:04:47.200959   62561 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0807 19:04:47.200970   62561 command_runner.go:130] > # creation as a file is not desired either.
	I0807 19:04:47.200983   62561 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0807 19:04:47.200997   62561 command_runner.go:130] > # the hostname is being managed dynamically.
	I0807 19:04:47.201008   62561 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0807 19:04:47.201012   62561 command_runner.go:130] > # ]
	I0807 19:04:47.201022   62561 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0807 19:04:47.201035   62561 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0807 19:04:47.201047   62561 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0807 19:04:47.201058   62561 command_runner.go:130] > # Each entry in the table should follow the format:
	I0807 19:04:47.201063   62561 command_runner.go:130] > #
	I0807 19:04:47.201073   62561 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0807 19:04:47.201082   62561 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0807 19:04:47.201142   62561 command_runner.go:130] > # runtime_type = "oci"
	I0807 19:04:47.201162   62561 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0807 19:04:47.201173   62561 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0807 19:04:47.201183   62561 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0807 19:04:47.201190   62561 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0807 19:04:47.201199   62561 command_runner.go:130] > # monitor_env = []
	I0807 19:04:47.201208   62561 command_runner.go:130] > # privileged_without_host_devices = false
	I0807 19:04:47.201218   62561 command_runner.go:130] > # allowed_annotations = []
	I0807 19:04:47.201226   62561 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0807 19:04:47.201234   62561 command_runner.go:130] > # Where:
	I0807 19:04:47.201242   62561 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0807 19:04:47.201254   62561 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0807 19:04:47.201264   62561 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0807 19:04:47.201274   62561 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0807 19:04:47.201283   62561 command_runner.go:130] > #   in $PATH.
	I0807 19:04:47.201296   62561 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0807 19:04:47.201304   62561 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0807 19:04:47.201311   62561 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0807 19:04:47.201317   62561 command_runner.go:130] > #   state.
	I0807 19:04:47.201326   62561 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0807 19:04:47.201337   62561 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0807 19:04:47.201350   62561 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0807 19:04:47.201361   62561 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0807 19:04:47.201374   62561 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0807 19:04:47.201388   62561 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0807 19:04:47.201399   62561 command_runner.go:130] > #   The currently recognized values are:
	I0807 19:04:47.201412   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0807 19:04:47.201426   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0807 19:04:47.201434   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0807 19:04:47.201443   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0807 19:04:47.201457   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0807 19:04:47.201467   62561 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0807 19:04:47.201481   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0807 19:04:47.201494   62561 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0807 19:04:47.201505   62561 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0807 19:04:47.201517   62561 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0807 19:04:47.201527   62561 command_runner.go:130] > #   deprecated option "conmon".
	I0807 19:04:47.201539   62561 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0807 19:04:47.201550   62561 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0807 19:04:47.201559   62561 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0807 19:04:47.201566   62561 command_runner.go:130] > #   should be moved to the container's cgroup
	I0807 19:04:47.201572   62561 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0807 19:04:47.201579   62561 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0807 19:04:47.201585   62561 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0807 19:04:47.201592   62561 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0807 19:04:47.201595   62561 command_runner.go:130] > #
	I0807 19:04:47.201600   62561 command_runner.go:130] > # Using the seccomp notifier feature:
	I0807 19:04:47.201605   62561 command_runner.go:130] > #
	I0807 19:04:47.201611   62561 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0807 19:04:47.201619   62561 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0807 19:04:47.201624   62561 command_runner.go:130] > #
	I0807 19:04:47.201636   62561 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0807 19:04:47.201646   62561 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0807 19:04:47.201651   62561 command_runner.go:130] > #
	I0807 19:04:47.201657   62561 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0807 19:04:47.201664   62561 command_runner.go:130] > # feature.
	I0807 19:04:47.201667   62561 command_runner.go:130] > #
	I0807 19:04:47.201675   62561 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0807 19:04:47.201684   62561 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0807 19:04:47.201691   62561 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0807 19:04:47.201699   62561 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0807 19:04:47.201704   62561 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0807 19:04:47.201710   62561 command_runner.go:130] > #
	I0807 19:04:47.201717   62561 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0807 19:04:47.201725   62561 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0807 19:04:47.201732   62561 command_runner.go:130] > #
	I0807 19:04:47.201738   62561 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0807 19:04:47.201746   62561 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0807 19:04:47.201750   62561 command_runner.go:130] > #
	I0807 19:04:47.201756   62561 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0807 19:04:47.201763   62561 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0807 19:04:47.201769   62561 command_runner.go:130] > # limitation.
	I0807 19:04:47.201774   62561 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0807 19:04:47.201780   62561 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0807 19:04:47.201784   62561 command_runner.go:130] > runtime_type = "oci"
	I0807 19:04:47.201791   62561 command_runner.go:130] > runtime_root = "/run/runc"
	I0807 19:04:47.201795   62561 command_runner.go:130] > runtime_config_path = ""
	I0807 19:04:47.201801   62561 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0807 19:04:47.201807   62561 command_runner.go:130] > monitor_cgroup = "pod"
	I0807 19:04:47.201811   62561 command_runner.go:130] > monitor_exec_cgroup = ""
	I0807 19:04:47.201815   62561 command_runner.go:130] > monitor_env = [
	I0807 19:04:47.201823   62561 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0807 19:04:47.201828   62561 command_runner.go:130] > ]
	I0807 19:04:47.201833   62561 command_runner.go:130] > privileged_without_host_devices = false
	I0807 19:04:47.201841   62561 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0807 19:04:47.201846   62561 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0807 19:04:47.201854   62561 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0807 19:04:47.201868   62561 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0807 19:04:47.201877   62561 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0807 19:04:47.201885   62561 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0807 19:04:47.201897   62561 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0807 19:04:47.201904   62561 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0807 19:04:47.201913   62561 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0807 19:04:47.201922   62561 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0807 19:04:47.201928   62561 command_runner.go:130] > # Example:
	I0807 19:04:47.201932   62561 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0807 19:04:47.201937   62561 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0807 19:04:47.201941   62561 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0807 19:04:47.201946   62561 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0807 19:04:47.201949   62561 command_runner.go:130] > # cpuset = 0
	I0807 19:04:47.201952   62561 command_runner.go:130] > # cpushares = "0-1"
	I0807 19:04:47.201956   62561 command_runner.go:130] > # Where:
	I0807 19:04:47.201960   62561 command_runner.go:130] > # The workload name is workload-type.
	I0807 19:04:47.201966   62561 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0807 19:04:47.201971   62561 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0807 19:04:47.201976   62561 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0807 19:04:47.201983   62561 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0807 19:04:47.201988   62561 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0807 19:04:47.201993   62561 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0807 19:04:47.201998   62561 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0807 19:04:47.202002   62561 command_runner.go:130] > # Default value is set to true
	I0807 19:04:47.202009   62561 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0807 19:04:47.202014   62561 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0807 19:04:47.202018   62561 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0807 19:04:47.202022   62561 command_runner.go:130] > # Default value is set to 'false'
	I0807 19:04:47.202026   62561 command_runner.go:130] > # disable_hostport_mapping = false
	I0807 19:04:47.202031   62561 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0807 19:04:47.202034   62561 command_runner.go:130] > #
	I0807 19:04:47.202040   62561 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0807 19:04:47.202045   62561 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0807 19:04:47.202051   62561 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0807 19:04:47.202057   62561 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0807 19:04:47.202061   62561 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0807 19:04:47.202069   62561 command_runner.go:130] > [crio.image]
	I0807 19:04:47.202075   62561 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0807 19:04:47.202078   62561 command_runner.go:130] > # default_transport = "docker://"
	I0807 19:04:47.202083   62561 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0807 19:04:47.202089   62561 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0807 19:04:47.202093   62561 command_runner.go:130] > # global_auth_file = ""
	I0807 19:04:47.202097   62561 command_runner.go:130] > # The image used to instantiate infra containers.
	I0807 19:04:47.202101   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.202105   62561 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0807 19:04:47.202111   62561 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0807 19:04:47.202119   62561 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0807 19:04:47.202126   62561 command_runner.go:130] > # This option supports live configuration reload.
	I0807 19:04:47.202132   62561 command_runner.go:130] > # pause_image_auth_file = ""
	I0807 19:04:47.202138   62561 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0807 19:04:47.202146   62561 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0807 19:04:47.202162   62561 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0807 19:04:47.202169   62561 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0807 19:04:47.202174   62561 command_runner.go:130] > # pause_command = "/pause"
	I0807 19:04:47.202181   62561 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0807 19:04:47.202189   62561 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0807 19:04:47.202194   62561 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0807 19:04:47.202202   62561 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0807 19:04:47.202208   62561 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0807 19:04:47.202216   62561 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0807 19:04:47.202220   62561 command_runner.go:130] > # pinned_images = [
	I0807 19:04:47.202226   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202231   62561 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0807 19:04:47.202239   62561 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0807 19:04:47.202246   62561 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0807 19:04:47.202253   62561 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0807 19:04:47.202261   62561 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0807 19:04:47.202265   62561 command_runner.go:130] > # signature_policy = ""
	I0807 19:04:47.202272   62561 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0807 19:04:47.202278   62561 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0807 19:04:47.202286   62561 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0807 19:04:47.202292   62561 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0807 19:04:47.202306   62561 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0807 19:04:47.202313   62561 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0807 19:04:47.202319   62561 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0807 19:04:47.202327   62561 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0807 19:04:47.202333   62561 command_runner.go:130] > # changing them here.
	I0807 19:04:47.202337   62561 command_runner.go:130] > # insecure_registries = [
	I0807 19:04:47.202342   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202347   62561 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0807 19:04:47.202354   62561 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0807 19:04:47.202359   62561 command_runner.go:130] > # image_volumes = "mkdir"
	I0807 19:04:47.202366   62561 command_runner.go:130] > # Temporary directory to use for storing big files
	I0807 19:04:47.202370   62561 command_runner.go:130] > # big_files_temporary_dir = ""
	I0807 19:04:47.202376   62561 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0807 19:04:47.202382   62561 command_runner.go:130] > # CNI plugins.
	I0807 19:04:47.202386   62561 command_runner.go:130] > [crio.network]
	I0807 19:04:47.202393   62561 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0807 19:04:47.202400   62561 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0807 19:04:47.202406   62561 command_runner.go:130] > # cni_default_network = ""
	I0807 19:04:47.202412   62561 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0807 19:04:47.202418   62561 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0807 19:04:47.202423   62561 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0807 19:04:47.202429   62561 command_runner.go:130] > # plugin_dirs = [
	I0807 19:04:47.202433   62561 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0807 19:04:47.202439   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202444   62561 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0807 19:04:47.202450   62561 command_runner.go:130] > [crio.metrics]
	I0807 19:04:47.202457   62561 command_runner.go:130] > # Globally enable or disable metrics support.
	I0807 19:04:47.202466   62561 command_runner.go:130] > enable_metrics = true
	I0807 19:04:47.202475   62561 command_runner.go:130] > # Specify enabled metrics collectors.
	I0807 19:04:47.202485   62561 command_runner.go:130] > # Per default all metrics are enabled.
	I0807 19:04:47.202497   62561 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0807 19:04:47.202509   62561 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0807 19:04:47.202520   62561 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0807 19:04:47.202529   62561 command_runner.go:130] > # metrics_collectors = [
	I0807 19:04:47.202535   62561 command_runner.go:130] > # 	"operations",
	I0807 19:04:47.202545   62561 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0807 19:04:47.202560   62561 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0807 19:04:47.202567   62561 command_runner.go:130] > # 	"operations_errors",
	I0807 19:04:47.202572   62561 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0807 19:04:47.202578   62561 command_runner.go:130] > # 	"image_pulls_by_name",
	I0807 19:04:47.202582   62561 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0807 19:04:47.202589   62561 command_runner.go:130] > # 	"image_pulls_failures",
	I0807 19:04:47.202593   62561 command_runner.go:130] > # 	"image_pulls_successes",
	I0807 19:04:47.202599   62561 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0807 19:04:47.202603   62561 command_runner.go:130] > # 	"image_layer_reuse",
	I0807 19:04:47.202610   62561 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0807 19:04:47.202614   62561 command_runner.go:130] > # 	"containers_oom_total",
	I0807 19:04:47.202620   62561 command_runner.go:130] > # 	"containers_oom",
	I0807 19:04:47.202624   62561 command_runner.go:130] > # 	"processes_defunct",
	I0807 19:04:47.202630   62561 command_runner.go:130] > # 	"operations_total",
	I0807 19:04:47.202634   62561 command_runner.go:130] > # 	"operations_latency_seconds",
	I0807 19:04:47.202641   62561 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0807 19:04:47.202645   62561 command_runner.go:130] > # 	"operations_errors_total",
	I0807 19:04:47.202651   62561 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0807 19:04:47.202655   62561 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0807 19:04:47.202662   62561 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0807 19:04:47.202666   62561 command_runner.go:130] > # 	"image_pulls_success_total",
	I0807 19:04:47.202672   62561 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0807 19:04:47.202676   62561 command_runner.go:130] > # 	"containers_oom_count_total",
	I0807 19:04:47.202687   62561 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0807 19:04:47.202691   62561 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0807 19:04:47.202697   62561 command_runner.go:130] > # ]
	I0807 19:04:47.202702   62561 command_runner.go:130] > # The port on which the metrics server will listen.
	I0807 19:04:47.202709   62561 command_runner.go:130] > # metrics_port = 9090
	I0807 19:04:47.202713   62561 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0807 19:04:47.202719   62561 command_runner.go:130] > # metrics_socket = ""
	I0807 19:04:47.202724   62561 command_runner.go:130] > # The certificate for the secure metrics server.
	I0807 19:04:47.202732   62561 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0807 19:04:47.202740   62561 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0807 19:04:47.202744   62561 command_runner.go:130] > # certificate on any modification event.
	I0807 19:04:47.202751   62561 command_runner.go:130] > # metrics_cert = ""
	I0807 19:04:47.202757   62561 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0807 19:04:47.202769   62561 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0807 19:04:47.202775   62561 command_runner.go:130] > # metrics_key = ""
	I0807 19:04:47.202781   62561 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0807 19:04:47.202788   62561 command_runner.go:130] > [crio.tracing]
	I0807 19:04:47.202793   62561 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0807 19:04:47.202799   62561 command_runner.go:130] > # enable_tracing = false
	I0807 19:04:47.202804   62561 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0807 19:04:47.202811   62561 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0807 19:04:47.202817   62561 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0807 19:04:47.202824   62561 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0807 19:04:47.202828   62561 command_runner.go:130] > # CRI-O NRI configuration.
	I0807 19:04:47.202834   62561 command_runner.go:130] > [crio.nri]
	I0807 19:04:47.202838   62561 command_runner.go:130] > # Globally enable or disable NRI.
	I0807 19:04:47.202844   62561 command_runner.go:130] > # enable_nri = false
	I0807 19:04:47.202848   62561 command_runner.go:130] > # NRI socket to listen on.
	I0807 19:04:47.202853   62561 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0807 19:04:47.202857   62561 command_runner.go:130] > # NRI plugin directory to use.
	I0807 19:04:47.202864   62561 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0807 19:04:47.202869   62561 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0807 19:04:47.202875   62561 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0807 19:04:47.202880   62561 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0807 19:04:47.202886   62561 command_runner.go:130] > # nri_disable_connections = false
	I0807 19:04:47.202891   62561 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0807 19:04:47.202896   62561 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0807 19:04:47.202902   62561 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0807 19:04:47.202909   62561 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0807 19:04:47.202920   62561 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0807 19:04:47.202928   62561 command_runner.go:130] > [crio.stats]
	I0807 19:04:47.202937   62561 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0807 19:04:47.202947   62561 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0807 19:04:47.203034   62561 command_runner.go:130] > # stats_collection_period = 0
	I0807 19:04:47.203079   62561 command_runner.go:130] ! time="2024-08-07 19:04:47.159931880Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0807 19:04:47.203102   62561 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
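	The lines above are CRI-O echoing its commented-out defaults for the [crio.metrics], [crio.tracing], [crio.nri] and [crio.stats] sections, followed by its startup banner. As a rough sketch only, and assuming the stock CRI-O config paths inside the minikube VM (they are not taken from this log), the same settings could be inspected over minikube ssh:

	  # main config plus any drop-in directory (standard CRI-O locations, assumed here)
	  minikube ssh -p multinode-334028 -- sudo grep -n '^\[crio' /etc/crio/crio.conf
	  minikube ssh -p multinode-334028 -- sudo ls /etc/crio/crio.conf.d/
	  # uncommenting e.g. metrics_port = 9090 under [crio.metrics] and restarting CRI-O
	  # would expose the metrics endpoint the comments above describe
	  minikube ssh -p multinode-334028 -- sudo systemctl restart crio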
	I0807 19:04:47.203228   62561 cni.go:84] Creating CNI manager for ""
	I0807 19:04:47.203239   62561 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0807 19:04:47.203251   62561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:04:47.203278   62561 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-334028 NodeName:multinode-334028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:04:47.203445   62561 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-334028"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
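	The kubeadm config printed above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) for the control-plane node at 192.168.39.165. A few lines below, the log copies this rendering to /var/tmp/minikube/kubeadm.yaml.new on the node; as an illustrative sketch, assuming an earlier start left kubeadm.yaml in place, the two could be compared directly:

	  # show the freshly rendered manifest and diff it against the one from the previous start
	  minikube ssh -p multinode-334028 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  minikube ssh -p multinode-334028 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new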
	
	I0807 19:04:47.203517   62561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:04:47.215299   62561 command_runner.go:130] > kubeadm
	I0807 19:04:47.215317   62561 command_runner.go:130] > kubectl
	I0807 19:04:47.215324   62561 command_runner.go:130] > kubelet
	I0807 19:04:47.215345   62561 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:04:47.215393   62561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:04:47.226675   62561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0807 19:04:47.246302   62561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:04:47.265984   62561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0807 19:04:47.283769   62561 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0807 19:04:47.287840   62561 command_runner.go:130] > 192.168.39.165	control-plane.minikube.internal
	I0807 19:04:47.287945   62561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:04:47.431245   62561 ssh_runner.go:195] Run: sudo systemctl start kubelet
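	At this point the kubelet drop-in and unit file have been written, systemd reloaded, and the kubelet started. A quick way to confirm the result, sketched here with standard systemd tooling rather than anything run in the original test, would be:

	  minikube ssh -p multinode-334028 -- sudo systemctl status kubelet --no-pager
	  minikube ssh -p multinode-334028 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	  minikube ssh -p multinode-334028 -- sudo journalctl -u kubelet -n 20 --no-pager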
	I0807 19:04:47.446625   62561 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028 for IP: 192.168.39.165
	I0807 19:04:47.446646   62561 certs.go:194] generating shared ca certs ...
	I0807 19:04:47.446673   62561 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:04:47.446833   62561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:04:47.446870   62561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:04:47.446879   62561 certs.go:256] generating profile certs ...
	I0807 19:04:47.446952   62561 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/client.key
	I0807 19:04:47.447015   62561 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key.dfb147c6
	I0807 19:04:47.447051   62561 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key
	I0807 19:04:47.447062   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0807 19:04:47.447076   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0807 19:04:47.447092   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0807 19:04:47.447105   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0807 19:04:47.447117   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0807 19:04:47.447131   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0807 19:04:47.447143   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0807 19:04:47.447156   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0807 19:04:47.447210   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:04:47.447236   62561 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:04:47.447245   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:04:47.447267   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:04:47.447289   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:04:47.447313   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:04:47.447349   62561 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:04:47.447379   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.447392   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem -> /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.447405   62561 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.447970   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:04:47.473336   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:04:47.497296   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:04:47.520492   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:04:47.544634   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0807 19:04:47.568080   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:04:47.592467   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:04:47.615695   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/multinode-334028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:04:47.638317   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:04:47.662548   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:04:47.685553   62561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:04:47.708890   62561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:04:47.729771   62561 ssh_runner.go:195] Run: openssl version
	I0807 19:04:47.738103   62561 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0807 19:04:47.738178   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:04:47.761732   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767669   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767875   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.767928   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:04:47.777615   62561 command_runner.go:130] > b5213941
	I0807 19:04:47.777940   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:04:47.798281   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:04:47.842552   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849385   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849619   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.849710   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:04:47.857418   62561 command_runner.go:130] > 51391683
	I0807 19:04:47.857764   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:04:47.879364   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:04:47.895522   62561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.900920   62561 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.901230   62561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.901283   62561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:04:47.907038   62561 command_runner.go:130] > 3ec20f2e
	I0807 19:04:47.907096   62561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
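	The three test/ln blocks above implement the standard OpenSSL trust-store layout: each CA file is hashed with openssl x509 -hash and linked into /etc/ssl/certs under <hash>.0. A minimal sketch of the same pattern for a single certificate (the path is taken from the log):

	  # derive the subject hash and create the <hash>.0 symlink, as the log does for each CA
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"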
	I0807 19:04:47.917943   62561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:04:47.927511   62561 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:04:47.927535   62561 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0807 19:04:47.927544   62561 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0807 19:04:47.927554   62561 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0807 19:04:47.927563   62561 command_runner.go:130] > Access: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927571   62561 command_runner.go:130] > Modify: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927577   62561 command_runner.go:130] > Change: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927582   62561 command_runner.go:130] >  Birth: 2024-08-07 18:57:46.498707647 +0000
	I0807 19:04:47.927638   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:04:47.933367   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.933546   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:04:47.939988   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.940102   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:04:47.945468   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.945792   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:04:47.951364   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.951743   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:04:47.963070   62561 command_runner.go:130] > Certificate will not expire
	I0807 19:04:47.963213   62561 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 19:04:47.973228   62561 command_runner.go:130] > Certificate will not expire
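	Each "Certificate will not expire" line is the result of openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The same check can be written as a small loop over the profile's certificates (file names taken from the commands above):

	  for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	    sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	      || echo "${c}.crt expires within 24h"
	  done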
	I0807 19:04:47.975172   62561 kubeadm.go:392] StartCluster: {Name:multinode-334028 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-334028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.119 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:04:47.975281   62561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:04:47.975368   62561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:04:48.024401   62561 command_runner.go:130] > ce4db7b426abe87403fa89c3fd94af24bdf03aa9c79808989468d4bd13c2a7bc
	I0807 19:04:48.024441   62561 command_runner.go:130] > a84394919dc587c6edc597087ad59d26dc822c41709e10ce4f6c1e487fe223e4
	I0807 19:04:48.024453   62561 command_runner.go:130] > 9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d
	I0807 19:04:48.024461   62561 command_runner.go:130] > 2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b
	I0807 19:04:48.024469   62561 command_runner.go:130] > 6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79
	I0807 19:04:48.024477   62561 command_runner.go:130] > ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88
	I0807 19:04:48.024487   62561 command_runner.go:130] > cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a
	I0807 19:04:48.024504   62561 command_runner.go:130] > da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072
	I0807 19:04:48.024531   62561 cri.go:89] found id: "ce4db7b426abe87403fa89c3fd94af24bdf03aa9c79808989468d4bd13c2a7bc"
	I0807 19:04:48.024545   62561 cri.go:89] found id: "a84394919dc587c6edc597087ad59d26dc822c41709e10ce4f6c1e487fe223e4"
	I0807 19:04:48.024551   62561 cri.go:89] found id: "9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d"
	I0807 19:04:48.024559   62561 cri.go:89] found id: "2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b"
	I0807 19:04:48.024569   62561 cri.go:89] found id: "6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79"
	I0807 19:04:48.024584   62561 cri.go:89] found id: "ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88"
	I0807 19:04:48.024596   62561 cri.go:89] found id: "cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a"
	I0807 19:04:48.024602   62561 cri.go:89] found id: "da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072"
	I0807 19:04:48.024609   62561 cri.go:89] found id: ""
	I0807 19:04:48.024662   62561 ssh_runner.go:195] Run: sudo runc list -f json
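	The "found id" entries come from the crictl listing issued just above; the follow-up runc list gives the low-level runtime view of the same containers. Reproducing both by hand, reusing the exact commands from the log, would look roughly like:

	  # CRI-level: container IDs labelled with the kube-system namespace
	  minikube ssh -p multinode-334028 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # OCI-level: runc's own container list as JSON (CRI-O's runc state root may differ from the default)
	  minikube ssh -p multinode-334028 -- sudo runc list -f json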
	
	
	==> CRI-O <==
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.529150663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057743529124834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aca73013-1a24-4b94-ad8d-758e2c990a52 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.529911205Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1217504f-478c-4e44-b26e-471d9ed14501 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.530025538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1217504f-478c-4e44-b26e-471d9ed14501 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.530439641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1217504f-478c-4e44-b26e-471d9ed14501 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.580748374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a41a42b-c14b-4ed2-876c-a4b2d43fd951 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.580821728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a41a42b-c14b-4ed2-876c-a4b2d43fd951 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.582425641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73659804-fcd7-40da-ab10-378a500e7650 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.582829505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057743582807631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73659804-fcd7-40da-ab10-378a500e7650 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.583437582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=933f054a-4bcd-4c44-b474-ead42403daab name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.583489674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=933f054a-4bcd-4c44-b474-ead42403daab name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.583874753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=933f054a-4bcd-4c44-b474-ead42403daab name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.631505344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49178fc6-2a5d-4704-863f-8b2803d18d5c name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.631584600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49178fc6-2a5d-4704-863f-8b2803d18d5c name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.633279876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6de21a51-1ab8-4536-a69a-65e915500002 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.633756164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057743633731133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6de21a51-1ab8-4536-a69a-65e915500002 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.634493849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf0cf783-3397-4dd8-af97-9b6871bc521b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.634572250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf0cf783-3397-4dd8-af97-9b6871bc521b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.635115870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf0cf783-3397-4dd8-af97-9b6871bc521b name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.679216276Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21818a51-8265-43cf-a596-ae844c5671ab name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.679319794Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21818a51-8265-43cf-a596-ae844c5671ab name=/runtime.v1.RuntimeService/Version
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.680430894Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=144cf9df-c246-4a0a-8112-b04ccbe30d9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.681121368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723057743681092371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=144cf9df-c246-4a0a-8112-b04ccbe30d9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.681866454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a5b8623-2134-427a-8278-36570a66e6d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.682069924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a5b8623-2134-427a-8278-36570a66e6d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:09:03 multinode-334028 crio[2908]: time="2024-08-07 19:09:03.683601962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ec28cb619c0f11474d5c737ac8d59e80fd74eb9d1f170c55e198ccb31c8e6dd4,PodSandboxId:ee248a82a815e2529220d4353b7b01dd2cac6cc0f8c795df27fbf4f8f4613dd9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723057526737176862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723057501048373564,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58ef668e9a68b275e5750bfb506e86936f065f112ce146c7fba5c1a4d3abfc5b,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723057501027850906,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815
e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2,PodSandboxId:351d8ec6860adcea67c5dec40ec1b3411bc31e02f94dbb0e88ab99cdc3c348f5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1723057493430233564,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},An
notations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906,PodSandboxId:3c1de91fb727de3ce09d2044755dc707115348edfa7c3390f8a9701028e54da4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723057493277196698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb27c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 6e364a1b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8,PodSandboxId:f280116a6f48237c8d805cef00a1416669120c1971e46bd5e7e6629ed3c0b619,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723057493239808034,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823,PodSandboxId:8ecd971a019aef84780fb101395aa787328d7fd9d579aa15ced6ae19fa178c75,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723057493213662449,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb491
8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0,PodSandboxId:8e26a2721be9dae43f29caccc1a94c56ff3f19844e9a5ad9e37cf75803eaf47f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723057493155383760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a3815e-97c5-48d7-8e76-4f0052a40096,},Annotations:map[string]string{io.kubernetes.container.hash: 52d79312,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef,PodSandboxId:3446b0b9fcd3086a06804406d19e49f5c3edae56e7d5286aded4e41c0d02e2a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723057493119128629,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da,PodSandboxId:1e0b756c4036d303eb26b561c93c864e2b587688f92f3c18ed396698d68d7a82,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723057493111783614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99,PodSandboxId:108e36891126b3d31acd05cf6522d6977eb849491541ffa67a53934d49981ef9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723057487928242796,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-582vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ee2fbc-330a-483e-9cb6-8eccc781a058,},Annotations:map[string]string{io.kubernetes.container.hash: 25d56c02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53
,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70642e6a4a0e3d3bb4c6c8ba0524c80afd941db7d785cbdab5d76a67e5973fb4,PodSandboxId:3bcd9b98a301476a52c16754cbdd97be02c30e93c65c9e571d97fd013fdd5eee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723057164229082442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v64x9,io.kubernetes.
pod.namespace: default,io.kubernetes.pod.uid: 740fe38b-1d09-4860-98d8-d1b7bbec0b6f,},Annotations:map[string]string{io.kubernetes.container.hash: 15af0190,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d,PodSandboxId:75585ea11a7b4e29d40d04142581a3b3aa8dd82b920ff009295e19a4e89aa320,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1723057093620547600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rwth9,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 0fc3b94f-0c9c-4a86-8229-cc904a5e844a,},Annotations:map[string]string{io.kubernetes.container.hash: b4b1d9cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b,PodSandboxId:39903e5997b32339af4402248ac0563dce6772113a5e3d1afbe31d4bede2d089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723057091143851798,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8zvz,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: b7f66bee-c532-4132-87a4-d40f6cc2b888,},Annotations:map[string]string{io.kubernetes.container.hash: c7feaa56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88,PodSandboxId:62d19a8b6aa97a047c6466d44dc3b32dac61b1650c711ae60bb79381f59477a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723057070480292031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-334028,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: c4217bfac3db5a54109f1d3204e1a41c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a,PodSandboxId:dbac8324051a45017d4484dba1af98fadaaf5cae6bb03a1cea0716cdd3572257,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723057070449024510,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff138ea9e8890a2fb2
7c64fcd2f5fc58,},Annotations:map[string]string{io.kubernetes.container.hash: 6e364a1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79,PodSandboxId:ed9e2d85fd55e658a19020434445939e6bd072299b893f1cf64e606f108b60ff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723057070486119823,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 095ea6a904ea01c7452eb8221d56b014,
},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072,PodSandboxId:3eebdfe2361ee914736bca18fd7dc45373dbc9087b280c1ebabbb55037a08818,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723057070435864290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-334028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 680c9177967713d371e8b271246a9ccd,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 11713227,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a5b8623-2134-427a-8278-36570a66e6d6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ec28cb619c0f1       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   ee248a82a815e       busybox-fc5497c4f-v64x9
	7ec38ea59ce30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   108e36891126b       coredns-7db6d8ff4d-582vz
	58ef668e9a68b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       2                   8e26a2721be9d       storage-provisioner
	61840be20cf15       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   351d8ec6860ad       kindnet-rwth9
	bceb3a268779b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   3c1de91fb727d       etcd-multinode-334028
	5a6f5ef6794eb       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   f280116a6f482       kube-proxy-l8zvz
	c02a1136327b0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   8ecd971a019ae       kube-controller-manager-multinode-334028
	dd95ca599aa17       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       1                   8e26a2721be9d       storage-provisioner
	c2a6a484bfabc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   3446b0b9fcd30       kube-apiserver-multinode-334028
	76c8635b399f6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   1e0b756c4036d       kube-scheduler-multinode-334028
	b6712261191c5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   108e36891126b       coredns-7db6d8ff4d-582vz
	70642e6a4a0e3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   3bcd9b98a3014       busybox-fc5497c4f-v64x9
	9e1010d7bf2b3       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    10 minutes ago      Exited              kindnet-cni               0                   75585ea11a7b4       kindnet-rwth9
	2ca940561e18e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   39903e5997b32       kube-proxy-l8zvz
	6da107968aee7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   ed9e2d85fd55e       kube-scheduler-multinode-334028
	ffc63a732f6bf       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   62d19a8b6aa97       kube-controller-manager-multinode-334028
	cf1948299290c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   dbac8324051a4       etcd-multinode-334028
	da12cb48b4b16       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   3eebdfe2361ee       kube-apiserver-multinode-334028
	
	
	==> coredns [7ec38ea59ce3095159c6f914ba4e79b1e7c4cbb904ce99cbe8fbc526e0e4be17] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40959 - 43980 "HINFO IN 2210918481587173305.2722027126383920797. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014744373s
	
	
	==> coredns [b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:40124 - 32074 "HINFO IN 183290254663183692.2361621144747932340. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021470383s
	
	
	==> describe nodes <==
	Name:               multinode-334028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-334028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T18_57_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 18:57:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334028
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:08:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:57:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:04:59 +0000   Wed, 07 Aug 2024 18:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    multinode-334028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 71b24a9feed2442eb9e04eb78076e9c1
	  System UUID:                71b24a9f-eed2-442e-b9e0-4eb78076e9c1
	  Boot ID:                    bf99b756-3ae4-48e4-9741-9e9664912a97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v64x9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 coredns-7db6d8ff4d-582vz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-334028                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-rwth9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-334028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-334028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-l8zvz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-334028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m6s   kube-proxy       
	  Normal   Starting                 10m    kube-proxy       
	  Normal   Starting                 11m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m    kubelet          Node multinode-334028 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m    kubelet          Node multinode-334028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m    kubelet          Node multinode-334028 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m    node-controller  Node multinode-334028 event: Registered Node multinode-334028 in Controller
	  Normal   NodeReady                10m    kubelet          Node multinode-334028 status is now: NodeReady
	  Warning  ContainerGCFailed        5m9s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 4m5s   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  4m5s   kubelet          Node multinode-334028 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m5s   kubelet          Node multinode-334028 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m5s   kubelet          Node multinode-334028 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m54s  node-controller  Node multinode-334028 event: Registered Node multinode-334028 in Controller
	
	
	Name:               multinode-334028-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-334028-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=multinode-334028
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_07T19_05_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:05:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-334028-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:06:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:07:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:07:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:07:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 07 Aug 2024 19:06:08 +0000   Wed, 07 Aug 2024 19:07:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    multinode-334028-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5eaf29ba1bed4de0b392b62bc360b7ce
	  System UUID:                5eaf29ba-1bed-4de0-b392-b62bc360b7ce
	  Boot ID:                    76414706-d5c2-47ff-9914-b2ce188f20d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qq6w4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-rdhb6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-fpwg7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-334028-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-334028-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-334028-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-334028-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-334028-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-334028-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-334028-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m7s                   kubelet          Node multinode-334028-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-334028-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058048] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.172648] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.144592] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.276303] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.158300] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +4.387732] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.061197] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.989866] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.076858] kauditd_printk_skb: 69 callbacks suppressed
	[Aug 7 18:58] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.130379] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.486439] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 7 18:59] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 7 19:04] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[  +0.154516] systemd-fstab-generator[2840]: Ignoring "noauto" option for root device
	[  +0.171748] systemd-fstab-generator[2854]: Ignoring "noauto" option for root device
	[  +0.142717] systemd-fstab-generator[2867]: Ignoring "noauto" option for root device
	[  +0.290406] systemd-fstab-generator[2895]: Ignoring "noauto" option for root device
	[  +1.002178] systemd-fstab-generator[2994]: Ignoring "noauto" option for root device
	[  +5.570393] kauditd_printk_skb: 132 callbacks suppressed
	[  +6.576835] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.095726] kauditd_printk_skb: 64 callbacks suppressed
	[Aug 7 19:05] kauditd_printk_skb: 24 callbacks suppressed
	[  +3.198936] systemd-fstab-generator[4092]: Ignoring "noauto" option for root device
	[ +13.250081] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [bceb3a268779bcef5f7caf633a0fe0dbaf4124c59d83f87b5e392a6180c14906] <==
	{"level":"info","ts":"2024-08-07T19:04:54.045359Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:04:54.054452Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.054503Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.054512Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:04:54.058524Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:04:54.074611Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:04:54.074667Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:04:54.074708Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:04:54.074714Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:04:55.794281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-08-07T19:04:55.794385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.794415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-08-07T19:04:55.801141Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:multinode-334028 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:04:55.801174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:04:55.801173Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:04:55.801451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:04:55.801463Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:04:55.803179Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-08-07T19:04:55.804164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-07T19:06:25.203744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.433661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m03\" ","response":"range_response_count:1 size:3117"}
	{"level":"info","ts":"2024-08-07T19:06:25.20418Z","caller":"traceutil/trace.go:171","msg":"trace[712435443] range","detail":"{range_begin:/registry/minions/multinode-334028-m03; range_end:; response_count:1; response_revision:1235; }","duration":"158.912325ms","start":"2024-08-07T19:06:25.045212Z","end":"2024-08-07T19:06:25.204124Z","steps":["trace[712435443] 'range keys from in-memory index tree'  (duration: 157.089362ms)"],"step_count":1}
	
	
	==> etcd [cf1948299290ce4f29ccb55e4d0bf2476a9af592592762e56cf1ffff55f0de6a] <==
	{"level":"info","ts":"2024-08-07T18:57:51.092601Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:57:51.092637Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T18:57:51.100998Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T18:57:51.101033Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-07T18:58:57.792232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.910092ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705900378134616216 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:59f6912e34631897>","response":"size:41"}
	{"level":"info","ts":"2024-08-07T18:58:57.792528Z","caller":"traceutil/trace.go:171","msg":"trace[2089730293] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:482; }","duration":"144.556547ms","start":"2024-08-07T18:58:57.647939Z","end":"2024-08-07T18:58:57.792496Z","steps":["trace[2089730293] 'read index received'  (duration: 143.830981ms)","trace[2089730293] 'applied index is now lower than readState.Index'  (duration: 724.933µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T18:58:57.793077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.112393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-08-07T18:58:57.793197Z","caller":"traceutil/trace.go:171","msg":"trace[2017032617] range","detail":"{range_begin:/registry/minions/multinode-334028-m02; range_end:; response_count:1; response_revision:460; }","duration":"145.263142ms","start":"2024-08-07T18:58:57.647916Z","end":"2024-08-07T18:58:57.793179Z","steps":["trace[2017032617] 'agreement among raft nodes before linearized reading'  (duration: 144.704573ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T18:58:57.793405Z","caller":"traceutil/trace.go:171","msg":"trace[1806786512] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"166.140865ms","start":"2024-08-07T18:58:57.627253Z","end":"2024-08-07T18:58:57.793393Z","steps":["trace[1806786512] 'process raft request'  (duration: 165.147025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-07T19:00:00.018481Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.577582ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15705900378134616706 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:59f6912e34631a81>","response":"size:41"}
	{"level":"info","ts":"2024-08-07T19:00:00.019122Z","caller":"traceutil/trace.go:171","msg":"trace[1328949118] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"175.541991ms","start":"2024-08-07T18:59:59.843546Z","end":"2024-08-07T19:00:00.019088Z","steps":["trace[1328949118] 'process raft request'  (duration: 175.367737ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:00:00.019312Z","caller":"traceutil/trace.go:171","msg":"trace[1262592073] linearizableReadLoop","detail":"{readStateIndex:644; appliedIndex:643; }","duration":"240.646893ms","start":"2024-08-07T18:59:59.778652Z","end":"2024-08-07T19:00:00.019299Z","steps":["trace[1262592073] 'read index received'  (duration: 77.260184ms)","trace[1262592073] 'applied index is now lower than readState.Index'  (duration: 163.386138ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-07T19:00:00.019467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.804335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-334028-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-08-07T19:00:00.01951Z","caller":"traceutil/trace.go:171","msg":"trace[64142701] range","detail":"{range_begin:/registry/minions/multinode-334028-m03; range_end:; response_count:1; response_revision:602; }","duration":"240.875375ms","start":"2024-08-07T18:59:59.778628Z","end":"2024-08-07T19:00:00.019503Z","steps":["trace[64142701] 'agreement among raft nodes before linearized reading'  (duration: 240.766795ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:00:52.849683Z","caller":"traceutil/trace.go:171","msg":"trace[356690587] transaction","detail":"{read_only:false; response_revision:728; number_of_response:1; }","duration":"106.162121ms","start":"2024-08-07T19:00:52.743488Z","end":"2024-08-07T19:00:52.84965Z","steps":["trace[356690587] 'process raft request'  (duration: 106.017597ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-07T19:03:14.19646Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-07T19:03:14.196579Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-334028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"warn","ts":"2024-08-07T19:03:14.196687Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.196805Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.237259Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:03:14.237316Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-07T19:03:14.237375Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"ffc3b7517aaad9f6"}
	{"level":"info","ts":"2024-08-07T19:03:14.243198Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:03:14.243367Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-08-07T19:03:14.24338Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-334028","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	
	
	==> kernel <==
	 19:09:04 up 11 min,  0 users,  load average: 0.24, 0.20, 0.12
	Linux multinode-334028 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [61840be20cf15d164f210d80ff7e5ff3ff0261794d682f9af01a1e95c71680a2] <==
	I0807 19:07:54.438508       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:04.443508       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:04.443572       1 main.go:299] handling current node
	I0807 19:08:04.443594       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:04.443600       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:14.439461       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:14.439566       1 main.go:299] handling current node
	I0807 19:08:14.439596       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:14.439611       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:24.437277       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:24.437380       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:24.437530       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:24.437536       1 main.go:299] handling current node
	I0807 19:08:34.444163       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:34.444268       1 main.go:299] handling current node
	I0807 19:08:34.444307       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:34.444313       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:44.439306       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:44.439378       1 main.go:299] handling current node
	I0807 19:08:44.439406       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:44.439412       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:08:54.438043       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:08:54.438104       1 main.go:299] handling current node
	I0807 19:08:54.438130       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:08:54.438142       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [9e1010d7bf2b37a9df7dbeb499b0d6b90e9a197e8cbec1c0234009ecf9494d7d] <==
	I0807 19:02:24.745720       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:34.755016       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:34.755232       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:34.755422       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:34.755450       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:34.755569       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:34.755590       1 main.go:299] handling current node
	I0807 19:02:44.750648       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:44.750696       1 main.go:299] handling current node
	I0807 19:02:44.750721       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:44.750726       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:44.750880       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:44.750906       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:54.752355       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:02:54.752521       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:02:54.752684       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:02:54.752728       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	I0807 19:02:54.752796       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:02:54.752815       1 main.go:299] handling current node
	I0807 19:03:04.748160       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0807 19:03:04.748209       1 main.go:299] handling current node
	I0807 19:03:04.748251       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I0807 19:03:04.748258       1 main.go:322] Node multinode-334028-m02 has CIDR [10.244.1.0/24] 
	I0807 19:03:04.748427       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0807 19:03:04.748454       1 main.go:322] Node multinode-334028-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c2a6a484bfabc40ce8eae1eac6019019717ddce9ac1ffc46e3379ae00ec795ef] <==
	I0807 19:04:57.148180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 19:04:57.150307       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 19:04:57.159706       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 19:04:57.159848       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 19:04:57.162511       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 19:04:57.162606       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 19:04:57.162771       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 19:04:57.170088       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 19:04:57.170615       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 19:04:57.170677       1 policy_source.go:224] refreshing policies
	I0807 19:04:57.171276       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 19:04:57.171790       1 aggregator.go:165] initial CRD sync complete...
	I0807 19:04:57.171853       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 19:04:57.171883       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 19:04:57.171913       1 cache.go:39] Caches are synced for autoregister controller
	E0807 19:04:57.196220       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0807 19:04:57.249805       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:04:58.053754       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 19:05:00.329424       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:05:00.448432       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:05:00.460846       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:05:00.525362       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:05:00.531024       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 19:05:10.205251       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0807 19:05:10.251561       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [da12cb48b4b16cc191533c409613126d0b4f8e6a4ccbea87adfe234ab45f2072] <==
	W0807 19:03:14.220566       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.220601       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.220632       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.221442       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.222104       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.222136       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.226506       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227171       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227705       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227788       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227846       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227889       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.227917       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228000       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228043       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228081       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228088       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228123       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228126       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228157       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228198       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228230       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228246       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228277       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0807 19:03:14.228293       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c02a1136327b0d6d1e03a629f5eca7010317f50e10a52c19e53231832562d823] <==
	I0807 19:05:37.957616       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m02\" does not exist"
	I0807 19:05:37.972630       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m02" podCIDRs=["10.244.1.0/24"]
	I0807 19:05:39.874350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.271µs"
	I0807 19:05:39.888146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.508µs"
	I0807 19:05:39.912485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.897µs"
	I0807 19:05:39.945260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.787µs"
	I0807 19:05:39.953674       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="150.497µs"
	I0807 19:05:39.968260       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.056µs"
	I0807 19:05:57.611804       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:05:57.631570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.292µs"
	I0807 19:05:57.646036       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.546µs"
	I0807 19:06:01.444898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.061209ms"
	I0807 19:06:01.445186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.991µs"
	I0807 19:06:15.951437       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:06:17.054561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:06:17.054649       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:06:17.070057       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.2.0/24"]
	I0807 19:06:36.749382       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m03"
	I0807 19:06:42.300739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:07:20.354655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.328372ms"
	I0807 19:07:20.354740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.595µs"
	I0807 19:07:30.150186       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sgwkv"
	I0807 19:07:30.177582       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sgwkv"
	I0807 19:07:30.177627       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-48b87"
	I0807 19:07:30.198386       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-48b87"
	
	
	==> kube-controller-manager [ffc63a732f6bfc9a377d254d375e694675ac8b2d929677be06d8a2a3ba048d88] <==
	I0807 18:58:57.800526       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m02\" does not exist"
	I0807 18:58:57.813589       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m02" podCIDRs=["10.244.1.0/24"]
	I0807 18:58:58.112191       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-334028-m02"
	I0807 18:59:18.518118       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 18:59:20.912998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.705993ms"
	I0807 18:59:20.938487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.429681ms"
	I0807 18:59:20.965308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.727232ms"
	I0807 18:59:20.965423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.323µs"
	I0807 18:59:24.845699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.855256ms"
	I0807 18:59:24.845796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.106µs"
	I0807 18:59:24.939652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.725402ms"
	I0807 18:59:24.940282       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.746µs"
	I0807 19:00:00.024013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:00:00.024140       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:00.060220       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.2.0/24"]
	I0807 19:00:03.136267       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-334028-m03"
	I0807 19:00:19.481826       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:47.635182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:48.697686       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-334028-m03\" does not exist"
	I0807 19:00:48.697831       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:00:48.711394       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-334028-m03" podCIDRs=["10.244.3.0/24"]
	I0807 19:01:08.427403       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m03"
	I0807 19:01:53.194780       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-334028-m02"
	I0807 19:01:53.245692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.462347ms"
	I0807 19:01:53.245896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.357µs"
	
	
	==> kube-proxy [2ca940561e18ec8f3bb688e8d5660c051550eb29e941f7bc1dac6f07389bfe6b] <==
	I0807 18:58:11.317138       1 server_linux.go:69] "Using iptables proxy"
	I0807 18:58:11.332511       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0807 18:58:11.368729       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 18:58:11.368761       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 18:58:11.368778       1 server_linux.go:165] "Using iptables Proxier"
	I0807 18:58:11.371919       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 18:58:11.372222       1 server.go:872] "Version info" version="v1.30.3"
	I0807 18:58:11.372256       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 18:58:11.374327       1 config.go:101] "Starting endpoint slice config controller"
	I0807 18:58:11.374367       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 18:58:11.374663       1 config.go:192] "Starting service config controller"
	I0807 18:58:11.374695       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 18:58:11.375134       1 config.go:319] "Starting node config controller"
	I0807 18:58:11.375141       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 18:58:11.475120       1 shared_informer.go:320] Caches are synced for service config
	I0807 18:58:11.475225       1 shared_informer.go:320] Caches are synced for node config
	I0807 18:58:11.475237       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [5a6f5ef6794eba9dd95c4e793e7876f09eb753460c6e50bd9472c0bbc7e310c8] <==
	I0807 19:04:53.752917       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:04:57.153455       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	I0807 19:04:57.273137       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:04:57.273198       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:04:57.273217       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:04:57.278930       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:04:57.279290       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:04:57.279321       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:04:57.284085       1 config.go:192] "Starting service config controller"
	I0807 19:04:57.284127       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:04:57.284154       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:04:57.284158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:04:57.286660       1 config.go:319] "Starting node config controller"
	I0807 19:04:57.286688       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:04:57.386153       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:04:57.386567       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:04:57.387025       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6da107968aee7b1a85d8ed6e65c7b5c26a240a842a8757880d93fe69fc468c79] <==
	E0807 18:57:52.972092       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:57:52.972318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0807 18:57:52.972352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0807 18:57:53.788120       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0807 18:57:53.788174       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0807 18:57:53.872709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0807 18:57:53.872758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0807 18:57:53.881725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0807 18:57:53.881885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0807 18:57:53.902046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0807 18:57:53.902140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0807 18:57:53.916819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0807 18:57:53.916906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0807 18:57:54.002529       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0807 18:57:54.002573       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0807 18:57:54.010548       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0807 18:57:54.010595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0807 18:57:54.146725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0807 18:57:54.147185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0807 18:57:54.219300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0807 18:57:54.219348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0807 18:57:54.393198       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0807 18:57:54.393319       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0807 18:57:56.559506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0807 19:03:14.206742       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [76c8635b399f68b0e1148f4296d2cfa7abc38b56f9f4d3d37843a72b598d87da] <==
	I0807 19:04:54.607677       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:04:57.093518       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:04:57.093565       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:04:57.093575       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:04:57.093581       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:04:57.147765       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:04:57.147798       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:04:57.157151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 19:04:57.157202       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:04:57.157805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 19:04:57.157874       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:04:57.258169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: E0807 19:05:00.957809    3880 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-334028\" already exists" pod="kube-system/etcd-multinode-334028"
	Aug 07 19:05:00 multinode-334028 kubelet[3880]: E0807 19:05:00.963430    3880 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-334028\" already exists" pod="kube-system/kube-apiserver-multinode-334028"
	Aug 07 19:05:01 multinode-334028 kubelet[3880]: I0807 19:05:01.005994    3880 scope.go:117] "RemoveContainer" containerID="dd95ca599aa17b3f965eeaa38582df348d65516309e82e2f5926f8d7c9c9b1b0"
	Aug 07 19:05:01 multinode-334028 kubelet[3880]: I0807 19:05:01.007440    3880 scope.go:117] "RemoveContainer" containerID="b6712261191c5a6f016fcefcfcc7676aef8010b08ed7cb0e1489962bca3dae99"
	Aug 07 19:05:09 multinode-334028 kubelet[3880]: I0807 19:05:09.650261    3880 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 07 19:05:59 multinode-334028 kubelet[3880]: E0807 19:05:59.892164    3880 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:05:59 multinode-334028 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:06:59 multinode-334028 kubelet[3880]: E0807 19:06:59.890708    3880 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:06:59 multinode-334028 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:06:59 multinode-334028 kubelet[3880]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:06:59 multinode-334028 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:06:59 multinode-334028 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:07:59 multinode-334028 kubelet[3880]: E0807 19:07:59.891140    3880 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:07:59 multinode-334028 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:07:59 multinode-334028 kubelet[3880]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:07:59 multinode-334028 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:07:59 multinode-334028 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 07 19:08:59 multinode-334028 kubelet[3880]: E0807 19:08:59.890799    3880 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 07 19:08:59 multinode-334028 kubelet[3880]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 07 19:08:59 multinode-334028 kubelet[3880]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 07 19:08:59 multinode-334028 kubelet[3880]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 07 19:08:59 multinode-334028 kubelet[3880]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:09:03.257495   64481 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
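The "token too long" message above comes from Go's bufio.Scanner: its default per-line limit is bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains lines longer than that. A minimal sketch of reading such a file with a larger buffer (the file name and the 10 MiB cap are illustrative assumptions, not the harness's actual settings):

	// Sketch: scan a log file whose lines can exceed bufio's 64 KiB default.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical local copy of the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Scan() fails with bufio.ErrTooLong ("token too long") once a line
		// exceeds the current cap; raising the cap (here to 10 MiB) avoids it.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}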
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-334028 -n multinode-334028
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-334028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.45s)

                                                
                                    
x
+
TestPreload (354.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-988014 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-988014 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m31.779332152s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-988014 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-988014 image pull gcr.io/k8s-minikube/busybox: (3.192519987s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-988014
E0807 19:16:31.076372   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-988014: exit status 82 (2m0.463923542s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-988014"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-988014 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-07 19:18:30.742185823 +0000 UTC m=+6154.984049639
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-988014 -n test-preload-988014
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-988014 -n test-preload-988014: exit status 3 (18.418109772s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:18:49.156558   68112 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host
	E0807 19:18:49.156579   68112 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.77:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-988014" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-988014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-988014
--- FAIL: TestPreload (354.75s)
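The post-mortem status checks above (out/minikube-linux-amd64 status --format={{.Host}} and --format={{.APIServer}}) pass a Go text/template that is rendered against minikube's status data. A minimal sketch of that mechanism (the Status struct and its values here are assumptions for illustration, not minikube's exact types):

	// Sketch: render a status-like struct with a Go template, the mechanism
	// behind `minikube status --format={{.Host}}`.
	package main

	import (
		"fmt"
		"os"
		"text/template"
	)

	// Status approximates the fields queried by the report's format strings.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		s := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
		if err := tmpl.Execute(os.Stdout, s); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}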

                                                
                                    
x
+
TestKubernetesUpgrade (429.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.063992328s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-235652] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-235652" primary control-plane node in "kubernetes-upgrade-235652" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 19:20:42.031686   69202 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:20:42.031947   69202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:20:42.031959   69202 out.go:304] Setting ErrFile to fd 2...
	I0807 19:20:42.031966   69202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:20:42.032248   69202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:20:42.032931   69202 out.go:298] Setting JSON to false
	I0807 19:20:42.034115   69202 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10988,"bootTime":1723047454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:20:42.034191   69202 start.go:139] virtualization: kvm guest
	I0807 19:20:42.035867   69202 out.go:177] * [kubernetes-upgrade-235652] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:20:42.037231   69202 notify.go:220] Checking for updates...
	I0807 19:20:42.038413   69202 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:20:42.041489   69202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:20:42.044139   69202 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:20:42.046879   69202 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:20:42.048040   69202 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:20:42.049202   69202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:20:42.050561   69202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:20:42.088708   69202 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 19:20:42.090107   69202 start.go:297] selected driver: kvm2
	I0807 19:20:42.090125   69202 start.go:901] validating driver "kvm2" against <nil>
	I0807 19:20:42.090139   69202 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:20:42.091114   69202 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:20:42.101596   69202 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:20:42.121860   69202 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:20:42.121907   69202 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 19:20:42.122138   69202 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 19:20:42.122162   69202 cni.go:84] Creating CNI manager for ""
	I0807 19:20:42.122174   69202 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:20:42.122183   69202 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 19:20:42.122252   69202 start.go:340] cluster config:
	{Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:20:42.122370   69202 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:20:42.124312   69202 out.go:177] * Starting "kubernetes-upgrade-235652" primary control-plane node in "kubernetes-upgrade-235652" cluster
	I0807 19:20:42.125589   69202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0807 19:20:42.125640   69202 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0807 19:20:42.125649   69202 cache.go:56] Caching tarball of preloaded images
	I0807 19:20:42.125778   69202 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:20:42.125789   69202 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0807 19:20:42.126120   69202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/config.json ...
	I0807 19:20:42.126143   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/config.json: {Name:mk44e3bc137a303c4d0baf061b0fdffe465ba655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:20:42.126274   69202 start.go:360] acquireMachinesLock for kubernetes-upgrade-235652: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:21:08.225862   69202 start.go:364] duration metric: took 26.099542864s to acquireMachinesLock for "kubernetes-upgrade-235652"
	I0807 19:21:08.225912   69202 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 19:21:08.226020   69202 start.go:125] createHost starting for "" (driver="kvm2")
	I0807 19:21:08.228338   69202 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0807 19:21:08.228523   69202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:21:08.228579   69202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:21:08.246418   69202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0807 19:21:08.246873   69202 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:21:08.247379   69202 main.go:141] libmachine: Using API Version  1
	I0807 19:21:08.247406   69202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:21:08.247748   69202 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:21:08.247952   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:21:08.248115   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:08.248460   69202 start.go:159] libmachine.API.Create for "kubernetes-upgrade-235652" (driver="kvm2")
	I0807 19:21:08.248493   69202 client.go:168] LocalClient.Create starting
	I0807 19:21:08.248535   69202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 19:21:08.248576   69202 main.go:141] libmachine: Decoding PEM data...
	I0807 19:21:08.248600   69202 main.go:141] libmachine: Parsing certificate...
	I0807 19:21:08.248674   69202 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 19:21:08.248698   69202 main.go:141] libmachine: Decoding PEM data...
	I0807 19:21:08.248718   69202 main.go:141] libmachine: Parsing certificate...
	I0807 19:21:08.248742   69202 main.go:141] libmachine: Running pre-create checks...
	I0807 19:21:08.248760   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .PreCreateCheck
	I0807 19:21:08.249088   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetConfigRaw
	I0807 19:21:08.249524   69202 main.go:141] libmachine: Creating machine...
	I0807 19:21:08.249542   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .Create
	I0807 19:21:08.249681   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Creating KVM machine...
	I0807 19:21:08.250844   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found existing default KVM network
	I0807 19:21:08.251655   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.251489   69544 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ed:74:f6} reservation:<nil>}
	I0807 19:21:08.252273   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.252188   69544 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fa50}
	I0807 19:21:08.252341   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | created network xml: 
	I0807 19:21:08.252364   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | <network>
	I0807 19:21:08.252375   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   <name>mk-kubernetes-upgrade-235652</name>
	I0807 19:21:08.252386   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   <dns enable='no'/>
	I0807 19:21:08.252416   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   
	I0807 19:21:08.252427   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0807 19:21:08.252433   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |     <dhcp>
	I0807 19:21:08.252439   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0807 19:21:08.252452   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |     </dhcp>
	I0807 19:21:08.252457   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   </ip>
	I0807 19:21:08.252466   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG |   
	I0807 19:21:08.252476   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | </network>
	I0807 19:21:08.252483   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | 
	I0807 19:21:08.257976   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | trying to create private KVM network mk-kubernetes-upgrade-235652 192.168.50.0/24...
	I0807 19:21:08.329435   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | private KVM network mk-kubernetes-upgrade-235652 192.168.50.0/24 created
	I0807 19:21:08.329483   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.329396   69544 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:21:08.329505   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652 ...
	I0807 19:21:08.329518   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 19:21:08.329538   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 19:21:08.562675   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.562521   69544 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa...
	I0807 19:21:08.778552   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.778396   69544 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/kubernetes-upgrade-235652.rawdisk...
	I0807 19:21:08.778588   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Writing magic tar header
	I0807 19:21:08.778666   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Writing SSH key tar header
	I0807 19:21:08.778714   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:08.778512   69544 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652 ...
	I0807 19:21:08.778732   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652 (perms=drwx------)
	I0807 19:21:08.778751   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 19:21:08.778764   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 19:21:08.778776   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 19:21:08.778799   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 19:21:08.778813   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652
	I0807 19:21:08.778827   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 19:21:08.778840   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:21:08.778851   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 19:21:08.778862   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 19:21:08.778872   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home/jenkins
	I0807 19:21:08.778895   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 19:21:08.778905   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Checking permissions on dir: /home
	I0807 19:21:08.778915   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Skipping /home - not owner
	I0807 19:21:08.778928   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Creating domain...
	I0807 19:21:08.780029   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) define libvirt domain using xml: 
	I0807 19:21:08.780052   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) <domain type='kvm'>
	I0807 19:21:08.780062   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <name>kubernetes-upgrade-235652</name>
	I0807 19:21:08.780071   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <memory unit='MiB'>2200</memory>
	I0807 19:21:08.780085   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <vcpu>2</vcpu>
	I0807 19:21:08.780092   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <features>
	I0807 19:21:08.780099   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <acpi/>
	I0807 19:21:08.780122   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <apic/>
	I0807 19:21:08.780134   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <pae/>
	I0807 19:21:08.780144   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     
	I0807 19:21:08.780152   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   </features>
	I0807 19:21:08.780164   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <cpu mode='host-passthrough'>
	I0807 19:21:08.780172   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   
	I0807 19:21:08.780181   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   </cpu>
	I0807 19:21:08.780190   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <os>
	I0807 19:21:08.780231   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <type>hvm</type>
	I0807 19:21:08.780247   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <boot dev='cdrom'/>
	I0807 19:21:08.780257   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <boot dev='hd'/>
	I0807 19:21:08.780265   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <bootmenu enable='no'/>
	I0807 19:21:08.780275   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   </os>
	I0807 19:21:08.780283   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   <devices>
	I0807 19:21:08.780293   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <disk type='file' device='cdrom'>
	I0807 19:21:08.780307   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/boot2docker.iso'/>
	I0807 19:21:08.780319   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <target dev='hdc' bus='scsi'/>
	I0807 19:21:08.780328   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <readonly/>
	I0807 19:21:08.780338   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </disk>
	I0807 19:21:08.780348   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <disk type='file' device='disk'>
	I0807 19:21:08.780361   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 19:21:08.780379   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/kubernetes-upgrade-235652.rawdisk'/>
	I0807 19:21:08.780390   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <target dev='hda' bus='virtio'/>
	I0807 19:21:08.780402   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </disk>
	I0807 19:21:08.780412   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <interface type='network'>
	I0807 19:21:08.780424   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <source network='mk-kubernetes-upgrade-235652'/>
	I0807 19:21:08.780435   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <model type='virtio'/>
	I0807 19:21:08.780444   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </interface>
	I0807 19:21:08.780455   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <interface type='network'>
	I0807 19:21:08.780469   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <source network='default'/>
	I0807 19:21:08.780479   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <model type='virtio'/>
	I0807 19:21:08.780491   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </interface>
	I0807 19:21:08.780501   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <serial type='pty'>
	I0807 19:21:08.780513   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <target port='0'/>
	I0807 19:21:08.780523   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </serial>
	I0807 19:21:08.780532   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <console type='pty'>
	I0807 19:21:08.780550   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <target type='serial' port='0'/>
	I0807 19:21:08.780563   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </console>
	I0807 19:21:08.780573   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     <rng model='virtio'>
	I0807 19:21:08.780587   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)       <backend model='random'>/dev/random</backend>
	I0807 19:21:08.780597   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     </rng>
	I0807 19:21:08.780608   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     
	I0807 19:21:08.780618   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)     
	I0807 19:21:08.780627   69202 main.go:141] libmachine: (kubernetes-upgrade-235652)   </devices>
	I0807 19:21:08.780642   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) </domain>
	I0807 19:21:08.780656   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) 
	I0807 19:21:08.784994   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:9e:f0:2a in network default
	I0807 19:21:08.785622   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Ensuring networks are active...
	I0807 19:21:08.785649   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:08.786427   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Ensuring network default is active
	I0807 19:21:08.786777   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Ensuring network mk-kubernetes-upgrade-235652 is active
	I0807 19:21:08.787296   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Getting domain xml...
	I0807 19:21:08.787984   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Creating domain...
	I0807 19:21:10.135024   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Waiting to get IP...
	I0807 19:21:10.136028   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.136523   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.136570   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:10.136512   69544 retry.go:31] will retry after 208.396642ms: waiting for machine to come up
	I0807 19:21:10.347160   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.347659   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.347688   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:10.347617   69544 retry.go:31] will retry after 241.150963ms: waiting for machine to come up
	I0807 19:21:10.590179   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.590664   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.590697   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:10.590626   69544 retry.go:31] will retry after 335.371025ms: waiting for machine to come up
	I0807 19:21:10.927272   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.927781   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:10.927817   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:10.927730   69544 retry.go:31] will retry after 403.336266ms: waiting for machine to come up
	I0807 19:21:11.332469   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:11.333102   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:11.333125   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:11.333045   69544 retry.go:31] will retry after 753.055089ms: waiting for machine to come up
	I0807 19:21:12.087953   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:12.088481   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:12.088511   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:12.088425   69544 retry.go:31] will retry after 835.823479ms: waiting for machine to come up
	I0807 19:21:12.925426   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:12.925843   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:12.925871   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:12.925794   69544 retry.go:31] will retry after 739.580229ms: waiting for machine to come up
	I0807 19:21:13.667468   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:13.667959   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:13.667982   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:13.667924   69544 retry.go:31] will retry after 1.006077289s: waiting for machine to come up
	I0807 19:21:14.675916   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:14.676357   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:14.676401   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:14.676283   69544 retry.go:31] will retry after 1.836728824s: waiting for machine to come up
	I0807 19:21:16.515185   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:16.515630   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:16.515661   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:16.515551   69544 retry.go:31] will retry after 1.647025772s: waiting for machine to come up
	I0807 19:21:18.164530   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:18.164941   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:18.164971   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:18.164894   69544 retry.go:31] will retry after 2.825372113s: waiting for machine to come up
	I0807 19:21:20.993291   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:20.993645   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:20.993690   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:20.993609   69544 retry.go:31] will retry after 3.11350567s: waiting for machine to come up
	I0807 19:21:24.108371   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:24.108760   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:24.108786   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:24.108710   69544 retry.go:31] will retry after 3.77235016s: waiting for machine to come up
	I0807 19:21:27.885116   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:27.885537   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find current IP address of domain kubernetes-upgrade-235652 in network mk-kubernetes-upgrade-235652
	I0807 19:21:27.885581   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | I0807 19:21:27.885489   69544 retry.go:31] will retry after 5.593470145s: waiting for machine to come up
	I0807 19:21:33.480621   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.481052   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Found IP for machine: 192.168.50.208
	I0807 19:21:33.481078   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has current primary IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.481087   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Reserving static IP address...
	I0807 19:21:33.481434   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-235652", mac: "52:54:00:24:38:b8", ip: "192.168.50.208"} in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.552329   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Getting to WaitForSSH function...
	I0807 19:21:33.552361   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Reserved static IP address: 192.168.50.208
	I0807 19:21:33.552374   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Waiting for SSH to be available...
	I0807 19:21:33.555006   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.555395   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:33.555424   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.555593   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Using SSH client type: external
	I0807 19:21:33.555630   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa (-rw-------)
	I0807 19:21:33.555680   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 19:21:33.555701   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | About to run SSH command:
	I0807 19:21:33.555720   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | exit 0
	I0807 19:21:33.680194   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | SSH cmd err, output: <nil>: 
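	The "exit 0" run above is a liveness probe: an external ssh invocation that only succeeds once the guest accepts connections. A rough standalone equivalent (host, key path, and timeout are example values, not taken from minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH shells out to the system ssh binary with the same kind of
	// options seen in the log (ConnectTimeout, StrictHostKeyChecking=no, ...)
	// and treats a successful `exit 0` as "SSH is available".
	func waitForSSH(host, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-o", "ConnectTimeout=10",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "IdentitiesOnly=yes",
				"-i", keyPath,
				"docker@"+host,
				"exit 0",
			)
			if err := cmd.Run(); err == nil {
				return nil // machine answered; SSH is up
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s not available after %s", host, timeout)
	}

	func main() {
		if err := waitForSSH("192.168.50.208", "/path/to/id_rsa", time.Minute); err != nil {
			fmt.Println(err)
		}
	}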
	I0807 19:21:33.680530   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) KVM machine creation complete!
	I0807 19:21:33.680816   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetConfigRaw
	I0807 19:21:33.681420   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:33.681587   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:33.681721   69202 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 19:21:33.681734   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetState
	I0807 19:21:33.682929   69202 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 19:21:33.682953   69202 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 19:21:33.682959   69202 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 19:21:33.682965   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:33.685002   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.685341   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:33.685363   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.685512   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:33.685693   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.685859   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.685996   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:33.686175   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:33.686361   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:33.686374   69202 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 19:21:33.795629   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:21:33.795661   69202 main.go:141] libmachine: Detecting the provisioner...
	I0807 19:21:33.795673   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:33.798587   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.798982   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:33.799010   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.799154   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:33.799357   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.799507   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.799625   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:33.799753   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:33.799946   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:33.799959   69202 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 19:21:33.909132   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 19:21:33.909207   69202 main.go:141] libmachine: found compatible host: buildroot
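	Provisioner detection works by parsing the /etc/os-release output captured above into key/value pairs and matching the ID field. A small sketch of that parsing step (helper name is made up):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns `cat /etc/os-release` output into a key/value map,
	// which is how a compatible host like "buildroot" can be detected.
	func parseOSRelease(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		f := parseOSRelease(out)
		fmt.Println("detected provisioner:", f["ID"]) // -> buildroot
	}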
	I0807 19:21:33.909220   69202 main.go:141] libmachine: Provisioning with buildroot...
	I0807 19:21:33.909227   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:21:33.909464   69202 buildroot.go:166] provisioning hostname "kubernetes-upgrade-235652"
	I0807 19:21:33.909488   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:21:33.909648   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:33.912245   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.912575   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:33.912607   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:33.912793   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:33.912983   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.913136   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:33.913266   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:33.913415   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:33.913630   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:33.913645   69202 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-235652 && echo "kubernetes-upgrade-235652" | sudo tee /etc/hostname
	I0807 19:21:34.039135   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-235652
	
	I0807 19:21:34.039185   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.041770   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.042078   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.042107   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.042278   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:34.042463   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.042618   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.042791   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:34.042940   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:34.043154   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:34.043172   69202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-235652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-235652/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-235652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:21:34.161162   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
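	The shell snippet above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic expressed in Go over the file contents (a sketch; writing the file back is omitted):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell above: if no line already maps the
	// hostname, either rewrite an existing 127.0.1.1 entry or append a new one.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts // already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostname(hosts, "kubernetes-upgrade-235652"))
	}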
	I0807 19:21:34.161193   69202 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:21:34.161224   69202 buildroot.go:174] setting up certificates
	I0807 19:21:34.161239   69202 provision.go:84] configureAuth start
	I0807 19:21:34.161248   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:21:34.161557   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:21:34.164112   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.164422   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.164443   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.164601   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.166778   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.167057   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.167076   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.167224   69202 provision.go:143] copyHostCerts
	I0807 19:21:34.167272   69202 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:21:34.167282   69202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:21:34.167333   69202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:21:34.167430   69202 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:21:34.167442   69202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:21:34.167484   69202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:21:34.167550   69202 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:21:34.167559   69202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:21:34.167578   69202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:21:34.167622   69202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-235652 san=[127.0.0.1 192.168.50.208 kubernetes-upgrade-235652 localhost minikube]
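	The "generating server cert ... san=[...]" step issues a CA-signed server certificate whose subject alternative names cover the machine IP, localhost and the node name. A compact crypto/x509 sketch of that idea (key sizes, lifetimes and error handling are simplified; this is not minikube's actual cert code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// create a throwaway CA, then sign a server cert with SANs matching the log line
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "kubernetes-upgrade-235652"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			// subject alternative names, matching the san=[...] list above
			DNSNames:    []string{"kubernetes-upgrade-235652", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.208")},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d bytes (signed by %s)\n", len(srvDER), caCert.Subject.CommonName)
	}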
	I0807 19:21:34.397479   69202 provision.go:177] copyRemoteCerts
	I0807 19:21:34.397535   69202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:21:34.397560   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.400071   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.400403   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.400433   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.400609   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:34.400802   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.400951   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:34.401124   69202 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:21:34.486882   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:21:34.511400   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:21:34.535536   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0807 19:21:34.559840   69202 provision.go:87] duration metric: took 398.58775ms to configureAuth
	I0807 19:21:34.559872   69202 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:21:34.560054   69202 config.go:182] Loaded profile config "kubernetes-upgrade-235652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0807 19:21:34.560164   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.563708   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.564167   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.564219   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.564390   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:34.564651   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.564852   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.565041   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:34.565213   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:34.565385   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:34.565406   69202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:21:34.826947   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:21:34.826978   69202 main.go:141] libmachine: Checking connection to Docker...
	I0807 19:21:34.826986   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetURL
	I0807 19:21:34.828389   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | Using libvirt version 6000000
	I0807 19:21:34.831063   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.831418   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.831454   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.831645   69202 main.go:141] libmachine: Docker is up and running!
	I0807 19:21:34.831658   69202 main.go:141] libmachine: Reticulating splines...
	I0807 19:21:34.831666   69202 client.go:171] duration metric: took 26.583161743s to LocalClient.Create
	I0807 19:21:34.831692   69202 start.go:167] duration metric: took 26.583231937s to libmachine.API.Create "kubernetes-upgrade-235652"
	I0807 19:21:34.831702   69202 start.go:293] postStartSetup for "kubernetes-upgrade-235652" (driver="kvm2")
	I0807 19:21:34.831718   69202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:21:34.831741   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:34.831953   69202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:21:34.831985   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.834030   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.834331   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.834360   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.834507   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:34.834721   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.834870   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:34.834985   69202 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:21:34.923000   69202 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:21:34.927247   69202 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:21:34.927268   69202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:21:34.927330   69202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:21:34.927412   69202 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:21:34.927499   69202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:21:34.936945   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:21:34.963158   69202 start.go:296] duration metric: took 131.43974ms for postStartSetup
	I0807 19:21:34.963216   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetConfigRaw
	I0807 19:21:34.963825   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:21:34.966751   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.967139   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.967182   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.967415   69202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/config.json ...
	I0807 19:21:34.967606   69202 start.go:128] duration metric: took 26.741574842s to createHost
	I0807 19:21:34.967627   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:34.970126   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.970534   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:34.970562   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:34.970743   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:34.970953   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.971146   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:34.971310   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:34.971505   69202 main.go:141] libmachine: Using SSH client type: native
	I0807 19:21:34.971716   69202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:21:34.971731   69202 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 19:21:35.080907   69202 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723058495.058766526
	
	I0807 19:21:35.080939   69202 fix.go:216] guest clock: 1723058495.058766526
	I0807 19:21:35.080949   69202 fix.go:229] Guest: 2024-08-07 19:21:35.058766526 +0000 UTC Remote: 2024-08-07 19:21:34.9676175 +0000 UTC m=+52.989048734 (delta=91.149026ms)
	I0807 19:21:35.080994   69202 fix.go:200] guest clock delta is within tolerance: 91.149026ms
	I0807 19:21:35.080999   69202 start.go:83] releasing machines lock for "kubernetes-upgrade-235652", held for 26.85511896s
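	The guest clock check parses the `date +%s.%N` output, subtracts the host time recorded just before the command ran, and only adjusts the clock when the delta exceeds a tolerance. A minimal version of that comparison (the 2s tolerance is illustrative):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta converts the guest's seconds.nanoseconds output into a time and
	// returns the absolute difference from the given host timestamp.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		d, _ := clockDelta("1723058495.058766526", time.Unix(1723058494, 967617500))
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", d, d < 2*time.Second)
	}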
	I0807 19:21:35.081031   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:35.081355   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:21:35.084273   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.084728   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:35.084751   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.084966   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:35.085463   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:35.085663   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:21:35.085740   69202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:21:35.085786   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:35.086046   69202 ssh_runner.go:195] Run: cat /version.json
	I0807 19:21:35.086090   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:21:35.088677   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.088830   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.089072   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:35.089099   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.089239   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:35.089267   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:35.089270   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:35.089463   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:35.089483   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:21:35.089642   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:21:35.089657   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:35.089840   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:21:35.089872   69202 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:21:35.089962   69202 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:21:35.198344   69202 ssh_runner.go:195] Run: systemctl --version
	I0807 19:21:35.205125   69202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:21:35.369878   69202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 19:21:35.378593   69202 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:21:35.378684   69202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:21:35.398579   69202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
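	Disabling the pre-installed bridge/podman CNI configs, as the find/mv command above does, simply renames matching files aside so they stop being loaded. An equivalent sketch in Go (directory path is the one from the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames every bridge/podman CNI config under dir to
	// <name>.mk_disabled so the cluster's own CNI choice takes effect.
	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		moved, err := disableBridgeCNI("/etc/cni/net.d")
		fmt.Println("disabled:", moved, "err:", err)
	}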
	I0807 19:21:35.398600   69202 start.go:495] detecting cgroup driver to use...
	I0807 19:21:35.398668   69202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:21:35.415931   69202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:21:35.433037   69202 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:21:35.433095   69202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:21:35.448509   69202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:21:35.466412   69202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:21:35.599789   69202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:21:35.748051   69202 docker.go:233] disabling docker service ...
	I0807 19:21:35.748120   69202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:21:35.765616   69202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:21:35.779212   69202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:21:35.925715   69202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:21:36.061186   69202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:21:36.077098   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:21:36.096344   69202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0807 19:21:36.096411   69202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:21:36.107793   69202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:21:36.107867   69202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:21:36.119275   69202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:21:36.130969   69202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
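	The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf: the pause image and the cgroup manager. The same edit over the file contents, expressed in Go (input text is a made-up example):

	package main

	import (
		"fmt"
		"regexp"
	)

	// configureCrio replaces the pause_image and cgroup_manager lines, matching
	// whole lines just as the sed expressions in the log do.
	func configureCrio(conf, pauseImage, cgroupMgr string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(configureCrio(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
	}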
	I0807 19:21:36.146086   69202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:21:36.158248   69202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:21:36.169287   69202 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 19:21:36.169348   69202 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 19:21:36.185490   69202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:21:36.197447   69202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:21:36.320438   69202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 19:21:36.474464   69202 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:21:36.474534   69202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:21:36.480015   69202 start.go:563] Will wait 60s for crictl version
	I0807 19:21:36.480093   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:36.484966   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:21:36.526235   69202 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:21:36.526324   69202 ssh_runner.go:195] Run: crio --version
	I0807 19:21:36.556756   69202 ssh_runner.go:195] Run: crio --version
	I0807 19:21:36.604245   69202 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0807 19:21:36.605479   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:21:36.608437   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:36.608800   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:21:23 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:21:36.608820   69202 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:21:36.609086   69202 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0807 19:21:36.613687   69202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:21:36.628342   69202 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:21:36.628516   69202 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0807 19:21:36.628603   69202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:21:36.685818   69202 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0807 19:21:36.685900   69202 ssh_runner.go:195] Run: which lz4
	I0807 19:21:36.690086   69202 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 19:21:36.694464   69202 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 19:21:36.694497   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0807 19:21:38.550651   69202 crio.go:462] duration metric: took 1.860609076s to copy over tarball
	I0807 19:21:38.550733   69202 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 19:21:41.483086   69202 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.932317193s)
	I0807 19:21:41.483121   69202 crio.go:469] duration metric: took 2.932438095s to extract the tarball
	I0807 19:21:41.483140   69202 ssh_runner.go:146] rm: /preloaded.tar.lz4
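	The preload step above checks whether /preloaded.tar.lz4 already exists, copies the cached tarball over if not, and extracts it into /var with tar and lz4. A simplified local sketch (a plain file copy stands in for scp, and the cache path is shortened):

	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)

	// ensurePreload copies the cached tarball into place if the target is missing,
	// then unpacks it with the same tar/lz4 flags as the logged command.
	func ensurePreload(cached, target string) error {
		if _, err := os.Stat(target); err == nil {
			return nil // already on the machine, nothing to transfer
		}
		src, err := os.Open(cached)
		if err != nil {
			return err
		}
		defer src.Close()
		dst, err := os.Create(target)
		if err != nil {
			return err
		}
		defer dst.Close()
		if _, err := io.Copy(dst, src); err != nil {
			return err
		}
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", target)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		fmt.Println(ensurePreload(
			"/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4",
			"/preloaded.tar.lz4"))
	}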
	I0807 19:21:41.533207   69202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:21:41.586367   69202 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0807 19:21:41.586395   69202 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0807 19:21:41.586443   69202 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:21:41.586492   69202 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0807 19:21:41.586519   69202 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0807 19:21:41.586534   69202 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0807 19:21:41.586552   69202 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0807 19:21:41.586506   69202 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0807 19:21:41.586690   69202 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0807 19:21:41.586695   69202 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0807 19:21:41.588218   69202 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0807 19:21:41.588243   69202 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0807 19:21:41.588248   69202 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:21:41.588253   69202 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0807 19:21:41.588225   69202 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0807 19:21:41.588224   69202 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0807 19:21:41.588218   69202 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0807 19:21:41.588378   69202 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0807 19:21:41.847583   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0807 19:21:41.879130   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0807 19:21:41.897800   69202 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0807 19:21:41.897841   69202 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0807 19:21:41.897892   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:41.918662   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0807 19:21:41.941812   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0807 19:21:41.941912   69202 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0807 19:21:41.941949   69202 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0807 19:21:41.941982   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:41.946781   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0807 19:21:41.963060   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0807 19:21:41.965628   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0807 19:21:41.983774   69202 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0807 19:21:41.983825   69202 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0807 19:21:41.983876   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:41.986346   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0807 19:21:42.075479   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0807 19:21:42.075628   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0807 19:21:42.112681   69202 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0807 19:21:42.112742   69202 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0807 19:21:42.112737   69202 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0807 19:21:42.112769   69202 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0807 19:21:42.112794   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:42.112807   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:42.120278   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0807 19:21:42.120321   69202 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0807 19:21:42.120358   69202 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0807 19:21:42.120399   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:42.120480   69202 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0807 19:21:42.120508   69202 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0807 19:21:42.120541   69202 ssh_runner.go:195] Run: which crictl
	I0807 19:21:42.148050   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0807 19:21:42.148131   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0807 19:21:42.148172   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0807 19:21:42.148172   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0807 19:21:42.211032   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0807 19:21:42.211209   69202 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0807 19:21:42.246855   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0807 19:21:42.246920   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0807 19:21:42.246934   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0807 19:21:42.276680   69202 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0807 19:21:42.645965   69202 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:21:42.787942   69202 cache_images.go:92] duration metric: took 1.201529491s to LoadCachedImages
	W0807 19:21:42.788053   69202 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19389-20864/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
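	Each "... needs transfer" line above is the result of comparing the image ID reported by the runtime with the hash the cache expects. A sketch of that check, shelling out to podman as the logged commands do (the image and hash values reuse the pause:3.2 lines above; the helper name is made up):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer asks the runtime for the image ID and compares it with the
	// expected hash; a mismatch or a missing image means the cached copy must be loaded.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the container runtime at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		img := "registry.k8s.io/pause:3.2"
		want := "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
		fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, want))
	}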
	I0807 19:21:42.788070   69202 kubeadm.go:934] updating node { 192.168.50.208 8443 v1.20.0 crio true true} ...
	I0807 19:21:42.788253   69202 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-235652 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
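	The kubelet unit shown above is rendered from a template with the node's runtime, Kubernetes version, hostname and IP filled in. A cut-down illustration of that rendering with text/template (the template text and field names are simplified, not minikube's real ones):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletTmpl is a trimmed stand-in for the systemd drop-in template.
	var kubeletTmpl = template.Must(template.New("kubelet").Parse(`[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint={{.Socket}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

	[Install]
	`))

	func main() {
		kubeletTmpl.Execute(os.Stdout, map[string]string{
			"Runtime":           "crio",
			"KubernetesVersion": "v1.20.0",
			"Socket":            "unix:///var/run/crio/crio.sock",
			"NodeName":          "kubernetes-upgrade-235652",
			"NodeIP":            "192.168.50.208",
		})
	}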
	I0807 19:21:42.788369   69202 ssh_runner.go:195] Run: crio config
	I0807 19:21:42.846751   69202 cni.go:84] Creating CNI manager for ""
	I0807 19:21:42.846774   69202 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:21:42.846787   69202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:21:42.846804   69202 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-235652 NodeName:kubernetes-upgrade-235652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0807 19:21:42.846994   69202 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-235652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:21:42.847074   69202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0807 19:21:42.857537   69202 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:21:42.857618   69202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:21:42.867562   69202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0807 19:21:42.886136   69202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:21:42.904965   69202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0807 19:21:42.925225   69202 ssh_runner.go:195] Run: grep 192.168.50.208	control-plane.minikube.internal$ /etc/hosts
	I0807 19:21:42.929268   69202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:21:42.941598   69202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:21:43.067009   69202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:21:43.095538   69202 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652 for IP: 192.168.50.208
	I0807 19:21:43.095563   69202 certs.go:194] generating shared ca certs ...
	I0807 19:21:43.095584   69202 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.095754   69202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:21:43.095811   69202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:21:43.095823   69202 certs.go:256] generating profile certs ...
	I0807 19:21:43.095891   69202 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.key
	I0807 19:21:43.095919   69202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.crt with IP's: []
	I0807 19:21:43.245838   69202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.crt ...
	I0807 19:21:43.245871   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.crt: {Name:mkfea2e16878fe64f66aeff5832c65c142ea4666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.246067   69202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.key ...
	I0807 19:21:43.246087   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.key: {Name:mk903d31421c9af9fd8ac26961a618856c799417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.246202   69202 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key.baace47c
	I0807 19:21:43.246220   69202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt.baace47c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.208]
	I0807 19:21:43.484215   69202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt.baace47c ...
	I0807 19:21:43.484247   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt.baace47c: {Name:mk3b7706f071dc7d8e9896f70711be301734becb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.484400   69202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key.baace47c ...
	I0807 19:21:43.484416   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key.baace47c: {Name:mk814827f99e72daa61abc7a66000c6a5890ed8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.484497   69202 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt.baace47c -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt
	I0807 19:21:43.484572   69202 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key.baace47c -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key
	I0807 19:21:43.484623   69202 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key
	I0807 19:21:43.484637   69202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.crt with IP's: []
	I0807 19:21:43.700757   69202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.crt ...
	I0807 19:21:43.700782   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.crt: {Name:mk274bdc020d4310a6279559d31a564f402ccadc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.700935   69202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key ...
	I0807 19:21:43.700947   69202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key: {Name:mk93134c43a858e9c1babd8e54ceacfd43a275cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:21:43.701110   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:21:43.701146   69202 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:21:43.701157   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:21:43.701176   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:21:43.701198   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:21:43.701218   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:21:43.701257   69202 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:21:43.701764   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:21:43.731535   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:21:43.761077   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:21:43.788026   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:21:43.815714   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 19:21:43.844762   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:21:43.873081   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:21:43.903943   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:21:43.941569   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:21:43.975382   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:21:44.010653   69202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:21:44.043917   69202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:21:44.062420   69202 ssh_runner.go:195] Run: openssl version
	I0807 19:21:44.068424   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:21:44.079556   69202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:21:44.084345   69202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:21:44.084417   69202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:21:44.090789   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:21:44.101949   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:21:44.114288   69202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:21:44.118879   69202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:21:44.118944   69202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:21:44.125025   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:21:44.135638   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:21:44.146204   69202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:21:44.150762   69202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:21:44.150841   69202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:21:44.156487   69202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
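The profile and CA material copied above can be inspected directly with openssl; a small sketch, with cert paths and the expected SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.208) taken from earlier lines of this log:

	# confirm the apiserver cert carries the SANs minikube generated it with
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"
	# the same -hash check used when linking the CA bundle entries above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem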
	I0807 19:21:44.167757   69202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:21:44.172024   69202 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:21:44.172081   69202 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:21:44.172170   69202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:21:44.172234   69202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:21:44.208423   69202 cri.go:89] found id: ""
	I0807 19:21:44.208496   69202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 19:21:44.218992   69202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 19:21:44.229104   69202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 19:21:44.239246   69202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:21:44.239269   69202 kubeadm.go:157] found existing configuration files:
	
	I0807 19:21:44.239309   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 19:21:44.248723   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:21:44.248773   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 19:21:44.258314   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 19:21:44.267475   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:21:44.267541   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 19:21:44.276823   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 19:21:44.285844   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:21:44.285907   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 19:21:44.295359   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 19:21:44.304247   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:21:44.304315   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 19:21:44.313790   69202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 19:21:44.592946   69202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:23:42.774616   69202 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0807 19:23:42.774759   69202 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0807 19:23:42.776274   69202 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0807 19:23:42.776340   69202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 19:23:42.776420   69202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:23:42.776523   69202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:23:42.776634   69202 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:23:42.776711   69202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:23:42.778608   69202 out.go:204]   - Generating certificates and keys ...
	I0807 19:23:42.778700   69202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 19:23:42.778772   69202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 19:23:42.778863   69202 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 19:23:42.778951   69202 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 19:23:42.779038   69202 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 19:23:42.779108   69202 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 19:23:42.779185   69202 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 19:23:42.779357   69202 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	I0807 19:23:42.779429   69202 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 19:23:42.779631   69202 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	I0807 19:23:42.779736   69202 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 19:23:42.779833   69202 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 19:23:42.779895   69202 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 19:23:42.779965   69202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:23:42.780037   69202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:23:42.780117   69202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:23:42.780235   69202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:23:42.780314   69202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:23:42.780462   69202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:23:42.780583   69202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:23:42.780646   69202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 19:23:42.780743   69202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:23:42.782339   69202 out.go:204]   - Booting up control plane ...
	I0807 19:23:42.782457   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:23:42.782541   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:23:42.782629   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:23:42.782730   69202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:23:42.782911   69202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 19:23:42.783002   69202 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0807 19:23:42.783115   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:23:42.783408   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:23:42.783507   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:23:42.783742   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:23:42.783835   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:23:42.784101   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:23:42.784228   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:23:42.784466   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:23:42.784547   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:23:42.784795   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:23:42.784818   69202 kubeadm.go:310] 
	I0807 19:23:42.784868   69202 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0807 19:23:42.784922   69202 kubeadm.go:310] 		timed out waiting for the condition
	I0807 19:23:42.784932   69202 kubeadm.go:310] 
	I0807 19:23:42.784974   69202 kubeadm.go:310] 	This error is likely caused by:
	I0807 19:23:42.785026   69202 kubeadm.go:310] 		- The kubelet is not running
	I0807 19:23:42.785193   69202 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0807 19:23:42.785202   69202 kubeadm.go:310] 
	I0807 19:23:42.785353   69202 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0807 19:23:42.785411   69202 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0807 19:23:42.785463   69202 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0807 19:23:42.785471   69202 kubeadm.go:310] 
	I0807 19:23:42.785636   69202 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0807 19:23:42.785759   69202 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0807 19:23:42.785771   69202 kubeadm.go:310] 
	I0807 19:23:42.785902   69202 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0807 19:23:42.786037   69202 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0807 19:23:42.786155   69202 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0807 19:23:42.786274   69202 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0807 19:23:42.786328   69202 kubeadm.go:310] 
	W0807 19:23:42.786418   69202 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-235652 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
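The first kubeadm init attempt has now timed out waiting for the kubelet health endpoint; minikube resets and retries immediately below. The troubleshooting steps kubeadm prints boil down to a handful of commands, collected here as a sketch (every command and the localhost:10248 probe appear verbatim in the output above):

	# the health probe kubeadm kept retrying
	curl -sSL http://localhost:10248/healthz
	# kubelet state and recent logs
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause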
	
	I0807 19:23:42.786472   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0807 19:23:44.512543   69202 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.726026414s)
	I0807 19:23:44.512658   69202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:23:44.538100   69202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 19:23:44.556530   69202 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:23:44.556562   69202 kubeadm.go:157] found existing configuration files:
	
	I0807 19:23:44.556625   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 19:23:44.572052   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:23:44.572125   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 19:23:44.605116   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 19:23:44.623609   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:23:44.623684   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 19:23:44.638419   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 19:23:44.652366   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:23:44.652441   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 19:23:44.667178   69202 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 19:23:44.681983   69202 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:23:44.682054   69202 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 19:23:44.695701   69202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 19:23:44.787411   69202 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0807 19:23:44.787847   69202 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 19:23:44.968567   69202 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:23:44.968714   69202 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:23:44.968822   69202 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:23:45.216923   69202 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:23:45.377120   69202 out.go:204]   - Generating certificates and keys ...
	I0807 19:23:45.377262   69202 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 19:23:45.377382   69202 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 19:23:45.377505   69202 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0807 19:23:45.377588   69202 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0807 19:23:45.377684   69202 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0807 19:23:45.377761   69202 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0807 19:23:45.377846   69202 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0807 19:23:45.377931   69202 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0807 19:23:45.378032   69202 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0807 19:23:45.378142   69202 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0807 19:23:45.378193   69202 kubeadm.go:310] [certs] Using the existing "sa" key
	I0807 19:23:45.378266   69202 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:23:45.461880   69202 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:23:45.653596   69202 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:23:46.082197   69202 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:23:46.186891   69202 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:23:46.202547   69202 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:23:46.203943   69202 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:23:46.204028   69202 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 19:23:46.367613   69202 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:23:46.476497   69202 out.go:204]   - Booting up control plane ...
	I0807 19:23:46.476668   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:23:46.476777   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:23:46.476879   69202 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:23:46.476992   69202 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:23:46.477237   69202 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0807 19:24:26.383015   69202 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0807 19:24:26.383132   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:24:26.383411   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:24:31.383597   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:24:31.383875   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:24:41.384456   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:24:41.384644   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:25:01.387945   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:25:01.388384   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:25:41.388470   69202 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0807 19:25:41.388684   69202 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0807 19:25:41.388700   69202 kubeadm.go:310] 
	I0807 19:25:41.388752   69202 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0807 19:25:41.388813   69202 kubeadm.go:310] 		timed out waiting for the condition
	I0807 19:25:41.388833   69202 kubeadm.go:310] 
	I0807 19:25:41.388862   69202 kubeadm.go:310] 	This error is likely caused by:
	I0807 19:25:41.388937   69202 kubeadm.go:310] 		- The kubelet is not running
	I0807 19:25:41.389094   69202 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0807 19:25:41.389107   69202 kubeadm.go:310] 
	I0807 19:25:41.389251   69202 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0807 19:25:41.389306   69202 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0807 19:25:41.389350   69202 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0807 19:25:41.389358   69202 kubeadm.go:310] 
	I0807 19:25:41.389449   69202 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0807 19:25:41.389521   69202 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0807 19:25:41.389531   69202 kubeadm.go:310] 
	I0807 19:25:41.389622   69202 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0807 19:25:41.389693   69202 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0807 19:25:41.389765   69202 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0807 19:25:41.389830   69202 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0807 19:25:41.389837   69202 kubeadm.go:310] 
	I0807 19:25:41.390700   69202 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:25:41.390793   69202 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0807 19:25:41.390879   69202 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0807 19:25:41.390962   69202 kubeadm.go:394] duration metric: took 3m57.218884116s to StartCluster
	I0807 19:25:41.391035   69202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0807 19:25:41.391095   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0807 19:25:41.432874   69202 cri.go:89] found id: ""
	I0807 19:25:41.432903   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.432916   69202 logs.go:278] No container was found matching "kube-apiserver"
	I0807 19:25:41.432924   69202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0807 19:25:41.432984   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0807 19:25:41.471047   69202 cri.go:89] found id: ""
	I0807 19:25:41.471071   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.471077   69202 logs.go:278] No container was found matching "etcd"
	I0807 19:25:41.471083   69202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0807 19:25:41.471130   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0807 19:25:41.505996   69202 cri.go:89] found id: ""
	I0807 19:25:41.506032   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.506043   69202 logs.go:278] No container was found matching "coredns"
	I0807 19:25:41.506051   69202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0807 19:25:41.506105   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0807 19:25:41.545323   69202 cri.go:89] found id: ""
	I0807 19:25:41.545345   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.545353   69202 logs.go:278] No container was found matching "kube-scheduler"
	I0807 19:25:41.545359   69202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0807 19:25:41.545407   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0807 19:25:41.583202   69202 cri.go:89] found id: ""
	I0807 19:25:41.583225   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.583235   69202 logs.go:278] No container was found matching "kube-proxy"
	I0807 19:25:41.583242   69202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0807 19:25:41.583297   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0807 19:25:41.622853   69202 cri.go:89] found id: ""
	I0807 19:25:41.622880   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.622891   69202 logs.go:278] No container was found matching "kube-controller-manager"
	I0807 19:25:41.622898   69202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0807 19:25:41.622962   69202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0807 19:25:41.658701   69202 cri.go:89] found id: ""
	I0807 19:25:41.658726   69202 logs.go:276] 0 containers: []
	W0807 19:25:41.658734   69202 logs.go:278] No container was found matching "kindnet"
	I0807 19:25:41.658743   69202 logs.go:123] Gathering logs for kubelet ...
	I0807 19:25:41.658759   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0807 19:25:41.717745   69202 logs.go:123] Gathering logs for dmesg ...
	I0807 19:25:41.717778   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0807 19:25:41.733015   69202 logs.go:123] Gathering logs for describe nodes ...
	I0807 19:25:41.733042   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0807 19:25:41.868273   69202 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0807 19:25:41.868304   69202 logs.go:123] Gathering logs for CRI-O ...
	I0807 19:25:41.868321   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0807 19:25:41.977343   69202 logs.go:123] Gathering logs for container status ...
	I0807 19:25:41.977388   69202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0807 19:25:42.025916   69202 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0807 19:25:42.025955   69202 out.go:239] * 
	W0807 19:25:42.026018   69202 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0807 19:25:42.026053   69202 out.go:239] * 
	W0807 19:25:42.026906   69202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0807 19:25:42.030571   69202 out.go:177] 
	W0807 19:25:42.031911   69202 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0807 19:25:42.031979   69202 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0807 19:25:42.032011   69202 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0807 19:25:42.033503   69202 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-235652
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-235652: (1.535573708s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-235652 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-235652 status --format={{.Host}}: exit status 7 (65.974451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.509767958s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-235652 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.182944ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-235652] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-235652
	    minikube start -p kubernetes-upgrade-235652 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2356522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-235652 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-235652 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.951998728s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-07 19:27:48.308103396 +0000 UTC m=+6712.549967202
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-235652 -n kubernetes-upgrade-235652
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-235652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-235652 logs -n 25: (1.88339408s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-853483 sudo                 | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                 | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                 | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo find            | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo crio            | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-853483                      | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p pause-302295 --memory=2048         | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-252907             | running-upgrade-252907    | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p cert-options-405893                | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-493959           | force-systemd-env-493959  | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p force-systemd-flag-992969          | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:26 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-302295                       | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:26 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-405893 ssh               | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-405893 -- sudo        | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-405893                | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p cert-expiration-260571             | cert-expiration-260571    | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:26 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-235652          | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p kubernetes-upgrade-235652          | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:26 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-992969 ssh cat     | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-992969          | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	| start   | -p auto-853483 --memory=3072          | auto-853483               | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-302295                       | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	| start   | -p kindnet-853483                     | kindnet-853483            | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-235652          | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-235652          | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:27 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:26:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:26:58.396991   77301 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:26:58.397123   77301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:58.397134   77301 out.go:304] Setting ErrFile to fd 2...
	I0807 19:26:58.397140   77301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:58.397409   77301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:26:58.398135   77301 out.go:298] Setting JSON to false
	I0807 19:26:58.399384   77301 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11364,"bootTime":1723047454,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:26:58.399460   77301 start.go:139] virtualization: kvm guest
	I0807 19:26:58.401772   77301 out.go:177] * [kubernetes-upgrade-235652] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:26:58.403312   77301 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:26:58.403333   77301 notify.go:220] Checking for updates...
	I0807 19:26:58.406154   77301 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:26:58.407663   77301 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:26:58.408981   77301 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:26:58.410240   77301 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:26:58.411547   77301 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:26:53.854441   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:53.854934   76375 main.go:141] libmachine: (auto-853483) DBG | unable to find current IP address of domain auto-853483 in network mk-auto-853483
	I0807 19:26:53.854959   76375 main.go:141] libmachine: (auto-853483) DBG | I0807 19:26:53.854887   76831 retry.go:31] will retry after 4.18347232s: waiting for machine to come up
	I0807 19:26:58.041338   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.042065   76375 main.go:141] libmachine: (auto-853483) Found IP for machine: 192.168.72.13
	I0807 19:26:58.042106   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has current primary IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.042116   76375 main.go:141] libmachine: (auto-853483) Reserving static IP address...
	I0807 19:26:58.042508   76375 main.go:141] libmachine: (auto-853483) DBG | unable to find host DHCP lease matching {name: "auto-853483", mac: "52:54:00:0e:c2:31", ip: "192.168.72.13"} in network mk-auto-853483
	I0807 19:26:58.121628   76375 main.go:141] libmachine: (auto-853483) DBG | Getting to WaitForSSH function...
	I0807 19:26:58.121655   76375 main.go:141] libmachine: (auto-853483) Reserved static IP address: 192.168.72.13
	I0807 19:26:58.121664   76375 main.go:141] libmachine: (auto-853483) Waiting for SSH to be available...
	I0807 19:26:58.124198   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.124912   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.124943   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.125113   76375 main.go:141] libmachine: (auto-853483) DBG | Using SSH client type: external
	I0807 19:26:58.125139   76375 main.go:141] libmachine: (auto-853483) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa (-rw-------)
	I0807 19:26:58.125179   76375 main.go:141] libmachine: (auto-853483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 19:26:58.125194   76375 main.go:141] libmachine: (auto-853483) DBG | About to run SSH command:
	I0807 19:26:58.125223   76375 main.go:141] libmachine: (auto-853483) DBG | exit 0
	I0807 19:26:58.261449   76375 main.go:141] libmachine: (auto-853483) DBG | SSH cmd err, output: <nil>: 
	I0807 19:26:58.261756   76375 main.go:141] libmachine: (auto-853483) KVM machine creation complete!
	I0807 19:26:58.262096   76375 main.go:141] libmachine: (auto-853483) Calling .GetConfigRaw
	I0807 19:26:58.262668   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:58.262851   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:58.263001   76375 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 19:26:58.263019   76375 main.go:141] libmachine: (auto-853483) Calling .GetState
	I0807 19:26:58.264721   76375 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 19:26:58.264737   76375 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 19:26:58.264745   76375 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 19:26:58.264753   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.267522   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.267869   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.267899   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.268074   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:58.268287   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.268559   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.268750   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:58.268906   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:58.269081   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:58.269095   76375 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 19:26:58.384551   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:26:58.384574   76375 main.go:141] libmachine: Detecting the provisioner...
	I0807 19:26:58.384584   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.387661   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.388111   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.388136   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.388336   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:58.388492   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.388609   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.388729   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:58.388900   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:58.389121   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:58.389137   76375 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 19:26:58.413068   77301 config.go:182] Loaded profile config "kubernetes-upgrade-235652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0807 19:26:58.413481   77301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:26:58.413535   77301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:26:58.428894   77301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0807 19:26:58.429370   77301 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:26:58.429869   77301 main.go:141] libmachine: Using API Version  1
	I0807 19:26:58.429913   77301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:26:58.430268   77301 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:26:58.430435   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:26:58.430699   77301 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:26:58.431027   77301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:26:58.431069   77301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:26:58.445671   77301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39463
	I0807 19:26:58.446040   77301 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:26:58.446523   77301 main.go:141] libmachine: Using API Version  1
	I0807 19:26:58.446546   77301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:26:58.446966   77301 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:26:58.447188   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:26:58.481987   77301 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 19:26:58.483236   77301 start.go:297] selected driver: kvm2
	I0807 19:26:58.483249   77301 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:58.483356   77301 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:26:58.484292   77301 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:58.484379   77301 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:26:58.500286   77301 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:26:58.500754   77301 cni.go:84] Creating CNI manager for ""
	I0807 19:26:58.500781   77301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:58.500835   77301 start.go:340] cluster config:
	{Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-235652 Namesp
ace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:58.500954   77301 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:58.502789   77301 out.go:177] * Starting "kubernetes-upgrade-235652" primary control-plane node in "kubernetes-upgrade-235652" cluster
	I0807 19:26:59.637054   77047 start.go:364] duration metric: took 21.660447901s to acquireMachinesLock for "kindnet-853483"
	I0807 19:26:59.637114   77047 start.go:93] Provisioning new machine with config: &{Name:kindnet-853483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:kindnet-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 19:26:59.637216   77047 start.go:125] createHost starting for "" (driver="kvm2")
	I0807 19:26:58.505511   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 19:26:58.505587   76375 main.go:141] libmachine: found compatible host: buildroot
	I0807 19:26:58.505602   76375 main.go:141] libmachine: Provisioning with buildroot...
	I0807 19:26:58.505611   76375 main.go:141] libmachine: (auto-853483) Calling .GetMachineName
	I0807 19:26:58.505822   76375 buildroot.go:166] provisioning hostname "auto-853483"
	I0807 19:26:58.505853   76375 main.go:141] libmachine: (auto-853483) Calling .GetMachineName
	I0807 19:26:58.506021   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.508472   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.508813   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.508854   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.508943   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:58.509148   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.509321   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.509472   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:58.509655   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:58.509850   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:58.509872   76375 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-853483 && echo "auto-853483" | sudo tee /etc/hostname
	I0807 19:26:58.635019   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-853483
	
	I0807 19:26:58.635055   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.638049   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.638443   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.638471   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.638677   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:58.638846   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.639040   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.639193   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:58.639347   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:58.639526   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:58.639541   76375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-853483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-853483/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-853483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:26:58.761646   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
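The SSH snippet above keeps the /etc/hosts entry idempotent: if no line already maps the hostname, it either rewrites an existing 127.0.1.1 line or appends one. A minimal Go sketch of the same logic (the file path and hostname are taken from this log; the helper name is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logged shell logic: if no line already ends in
// the hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostsEntry(contents, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(contents) {
		return contents // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(contents) {
		return loopback.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	if !strings.HasSuffix(contents, "\n") {
		contents += "\n"
	}
	return contents + "127.0.1.1 " + hostname + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Prints the patched file rather than writing it back, so the sketch is non-destructive.
	fmt.Print(ensureHostsEntry(string(data), "auto-853483"))
}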
	I0807 19:26:58.761692   76375 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:26:58.761726   76375 buildroot.go:174] setting up certificates
	I0807 19:26:58.761760   76375 provision.go:84] configureAuth start
	I0807 19:26:58.761774   76375 main.go:141] libmachine: (auto-853483) Calling .GetMachineName
	I0807 19:26:58.762091   76375 main.go:141] libmachine: (auto-853483) Calling .GetIP
	I0807 19:26:58.764743   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.765110   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.765144   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.765294   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.767262   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.767643   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.767668   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.767819   76375 provision.go:143] copyHostCerts
	I0807 19:26:58.767886   76375 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:26:58.767899   76375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:26:58.767973   76375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:26:58.768103   76375 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:26:58.768116   76375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:26:58.768146   76375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:26:58.768262   76375 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:26:58.768273   76375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:26:58.768300   76375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:26:58.768391   76375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.auto-853483 san=[127.0.0.1 192.168.72.13 auto-853483 localhost minikube]
	I0807 19:26:58.899951   76375 provision.go:177] copyRemoteCerts
	I0807 19:26:58.900017   76375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:26:58.900044   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:58.902776   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.903111   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:58.903158   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:58.903333   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:58.903498   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:58.903663   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:58.903821   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:26:58.993564   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:26:59.023373   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0807 19:26:59.053114   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:26:59.084844   76375 provision.go:87] duration metric: took 323.065176ms to configureAuth
	I0807 19:26:59.084881   76375 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:26:59.085098   76375 config.go:182] Loaded profile config "auto-853483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:26:59.085198   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:59.088077   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.088523   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.088589   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.088744   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:59.088975   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.089170   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.089328   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:59.089508   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:59.089740   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:59.089768   76375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:26:59.390433   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:26:59.390464   76375 main.go:141] libmachine: Checking connection to Docker...
	I0807 19:26:59.390475   76375 main.go:141] libmachine: (auto-853483) Calling .GetURL
	I0807 19:26:59.392081   76375 main.go:141] libmachine: (auto-853483) DBG | Using libvirt version 6000000
	I0807 19:26:59.394676   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.395025   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.395051   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.395244   76375 main.go:141] libmachine: Docker is up and running!
	I0807 19:26:59.395270   76375 main.go:141] libmachine: Reticulating splines...
	I0807 19:26:59.395278   76375 client.go:171] duration metric: took 24.316882779s to LocalClient.Create
	I0807 19:26:59.395328   76375 start.go:167] duration metric: took 24.316961482s to libmachine.API.Create "auto-853483"
	I0807 19:26:59.395340   76375 start.go:293] postStartSetup for "auto-853483" (driver="kvm2")
	I0807 19:26:59.395353   76375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:26:59.395372   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:59.395751   76375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:26:59.395778   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:59.398352   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.398741   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.398769   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.398889   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:59.399053   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.399216   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:59.399418   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:26:59.483040   76375 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:26:59.487233   76375 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:26:59.487261   76375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:26:59.487327   76375 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:26:59.487429   76375 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:26:59.487541   76375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:26:59.496549   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:26:59.521602   76375 start.go:296] duration metric: took 126.248017ms for postStartSetup
	I0807 19:26:59.521660   76375 main.go:141] libmachine: (auto-853483) Calling .GetConfigRaw
	I0807 19:26:59.522295   76375 main.go:141] libmachine: (auto-853483) Calling .GetIP
	I0807 19:26:59.525038   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.525428   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.525454   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.525673   76375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json ...
	I0807 19:26:59.525878   76375 start.go:128] duration metric: took 24.472611812s to createHost
	I0807 19:26:59.525900   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:59.528105   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.528462   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.528503   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.528656   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:59.528831   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.529019   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.529202   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:59.529382   76375 main.go:141] libmachine: Using SSH client type: native
	I0807 19:26:59.529562   76375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0807 19:26:59.529575   76375 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:26:59.636878   76375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723058819.609760335
	
	I0807 19:26:59.636903   76375 fix.go:216] guest clock: 1723058819.609760335
	I0807 19:26:59.636912   76375 fix.go:229] Guest: 2024-08-07 19:26:59.609760335 +0000 UTC Remote: 2024-08-07 19:26:59.525889386 +0000 UTC m=+56.130649118 (delta=83.870949ms)
	I0807 19:26:59.636958   76375 fix.go:200] guest clock delta is within tolerance: 83.870949ms
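The fix step above compares the guest wall clock against the host's reference timestamp and accepts the machine because the 83.870949ms delta is within tolerance. A small illustrative check of that comparison (the 2-second tolerance here is an assumption, not necessarily the value minikube uses):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(83 * time.Millisecond) // delta comparable to the logged 83.870949ms
	// Tolerance of 2s is an illustrative assumption for this sketch.
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}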
	I0807 19:26:59.636964   76375 start.go:83] releasing machines lock for "auto-853483", held for 24.583899289s
	I0807 19:26:59.636998   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:59.637295   76375 main.go:141] libmachine: (auto-853483) Calling .GetIP
	I0807 19:26:59.640138   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.640563   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.640588   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.640729   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:59.641295   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:59.641464   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:26:59.641591   76375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:26:59.641631   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:59.641660   76375 ssh_runner.go:195] Run: cat /version.json
	I0807 19:26:59.641699   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:26:59.644516   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.644811   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.644903   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.644931   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.645079   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:59.645187   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:26:59.645208   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:26:59.645217   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.645353   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:26:59.645395   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:59.645531   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:26:59.645520   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:26:59.645636   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:26:59.645744   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:26:59.755065   76375 ssh_runner.go:195] Run: systemctl --version
	I0807 19:26:59.762000   76375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:26:59.936414   76375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 19:26:59.942643   76375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:26:59.942729   76375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:26:59.961671   76375 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 19:26:59.961698   76375 start.go:495] detecting cgroup driver to use...
	I0807 19:26:59.961767   76375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:26:59.980297   76375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:26:59.996353   76375 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:26:59.996447   76375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:27:00.013494   76375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:27:00.028851   76375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:27:00.155710   76375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:27:00.324064   76375 docker.go:233] disabling docker service ...
	I0807 19:27:00.324131   76375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:27:00.344840   76375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:27:00.365014   76375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:27:00.504586   76375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:27:00.635756   76375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:27:00.652020   76375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:27:00.673850   76375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 19:27:00.673937   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.686966   76375 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:27:00.687063   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.699200   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.711681   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.723703   76375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:27:00.735763   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.747316   76375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.767713   76375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:00.780643   76375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:27:00.791722   76375 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 19:27:00.791794   76375 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 19:27:00.809545   76375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:27:00.821736   76375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:00.960541   76375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 19:27:01.129869   76375 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:27:01.129949   76375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:27:01.136007   76375 start.go:563] Will wait 60s for crictl version
	I0807 19:27:01.136075   76375 ssh_runner.go:195] Run: which crictl
	I0807 19:27:01.145938   76375 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:27:01.197721   76375 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:27:01.197815   76375 ssh_runner.go:195] Run: crio --version
	I0807 19:27:01.227095   76375 ssh_runner.go:195] Run: crio --version
	I0807 19:27:01.268428   76375 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
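The block above rewrites CRI-O's drop-in config in place with sed: pinning the pause image to registry.k8s.io/pause:3.9, switching cgroup_manager to cgroupfs, allowing unprivileged ports, then restarting crio and confirming the runtime version. A sketch of the two key-value edits only, mirroring the logged sed expressions (illustrative Go, not the code that produced this log):

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = "value"`,
// matching the effect of the logged sed commands on 02-crio.conf.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+" = \""+value+"\"")
}

func main() {
	// Hypothetical starting contents of the drop-in file, for demonstration only.
	conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}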
	I0807 19:26:59.639540   77047 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0807 19:26:59.639729   77047 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:26:59.639783   77047 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:26:59.660266   77047 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0807 19:26:59.660800   77047 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:26:59.661422   77047 main.go:141] libmachine: Using API Version  1
	I0807 19:26:59.661447   77047 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:26:59.661809   77047 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:26:59.662000   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetMachineName
	I0807 19:26:59.662205   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:26:59.662399   77047 start.go:159] libmachine.API.Create for "kindnet-853483" (driver="kvm2")
	I0807 19:26:59.662433   77047 client.go:168] LocalClient.Create starting
	I0807 19:26:59.662476   77047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem
	I0807 19:26:59.662518   77047 main.go:141] libmachine: Decoding PEM data...
	I0807 19:26:59.662541   77047 main.go:141] libmachine: Parsing certificate...
	I0807 19:26:59.662607   77047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem
	I0807 19:26:59.662632   77047 main.go:141] libmachine: Decoding PEM data...
	I0807 19:26:59.662650   77047 main.go:141] libmachine: Parsing certificate...
	I0807 19:26:59.662672   77047 main.go:141] libmachine: Running pre-create checks...
	I0807 19:26:59.662689   77047 main.go:141] libmachine: (kindnet-853483) Calling .PreCreateCheck
	I0807 19:26:59.663046   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetConfigRaw
	I0807 19:26:59.663484   77047 main.go:141] libmachine: Creating machine...
	I0807 19:26:59.663503   77047 main.go:141] libmachine: (kindnet-853483) Calling .Create
	I0807 19:26:59.663626   77047 main.go:141] libmachine: (kindnet-853483) Creating KVM machine...
	I0807 19:26:59.664968   77047 main.go:141] libmachine: (kindnet-853483) DBG | found existing default KVM network
	I0807 19:26:59.666493   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:26:59.666292   77334 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e6:40:6b} reservation:<nil>}
	I0807 19:26:59.667335   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:26:59.667212   77334 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:aa:8b:62} reservation:<nil>}
	I0807 19:26:59.668491   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:26:59.668416   77334 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308770}
	I0807 19:26:59.668513   77047 main.go:141] libmachine: (kindnet-853483) DBG | created network xml: 
	I0807 19:26:59.668525   77047 main.go:141] libmachine: (kindnet-853483) DBG | <network>
	I0807 19:26:59.668534   77047 main.go:141] libmachine: (kindnet-853483) DBG |   <name>mk-kindnet-853483</name>
	I0807 19:26:59.668547   77047 main.go:141] libmachine: (kindnet-853483) DBG |   <dns enable='no'/>
	I0807 19:26:59.668567   77047 main.go:141] libmachine: (kindnet-853483) DBG |   
	I0807 19:26:59.668595   77047 main.go:141] libmachine: (kindnet-853483) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0807 19:26:59.668614   77047 main.go:141] libmachine: (kindnet-853483) DBG |     <dhcp>
	I0807 19:26:59.668631   77047 main.go:141] libmachine: (kindnet-853483) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0807 19:26:59.668642   77047 main.go:141] libmachine: (kindnet-853483) DBG |     </dhcp>
	I0807 19:26:59.668650   77047 main.go:141] libmachine: (kindnet-853483) DBG |   </ip>
	I0807 19:26:59.668663   77047 main.go:141] libmachine: (kindnet-853483) DBG |   
	I0807 19:26:59.668724   77047 main.go:141] libmachine: (kindnet-853483) DBG | </network>
	I0807 19:26:59.668744   77047 main.go:141] libmachine: (kindnet-853483) DBG | 
	I0807 19:26:59.674068   77047 main.go:141] libmachine: (kindnet-853483) DBG | trying to create private KVM network mk-kindnet-853483 192.168.61.0/24...
	I0807 19:26:59.743530   77047 main.go:141] libmachine: (kindnet-853483) DBG | private KVM network mk-kindnet-853483 192.168.61.0/24 created
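Subnet selection above skips 192.168.39.0/24 and 192.168.50.0/24 because they are already attached to virbr3 and virbr2, and settles on 192.168.61.0/24 for the new mk-kindnet-853483 network. A rough sketch of that scan; the candidate list and step size are assumptions read off the log, not the driver's actual allocation policy:

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not in the taken set.
// Starting octet, step, and try count are assumptions drawn from the logged
// sequence 39 -> 50 -> 61, not minikube's real implementation.
func firstFreeSubnet(start, step, tries int, taken map[string]bool) (string, bool) {
	for i := 0; i < tries; i++ {
		subnet := fmt.Sprintf("192.168.%d.0/24", start+i*step)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true, // virbr3, in use
		"192.168.50.0/24": true, // virbr2, in use
	}
	if subnet, ok := firstFreeSubnet(39, 11, 20, taken); ok {
		fmt.Println("using free private subnet", subnet)
	}
}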
	I0807 19:26:59.743559   77047 main.go:141] libmachine: (kindnet-853483) Setting up store path in /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483 ...
	I0807 19:26:59.743572   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:26:59.743497   77334 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:26:59.743620   77047 main.go:141] libmachine: (kindnet-853483) Building disk image from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 19:26:59.743659   77047 main.go:141] libmachine: (kindnet-853483) Downloading /home/jenkins/minikube-integration/19389-20864/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0807 19:26:59.994207   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:26:59.993998   77334 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa...
	I0807 19:27:00.291847   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:00.291700   77334 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/kindnet-853483.rawdisk...
	I0807 19:27:00.291877   77047 main.go:141] libmachine: (kindnet-853483) DBG | Writing magic tar header
	I0807 19:27:00.291891   77047 main.go:141] libmachine: (kindnet-853483) DBG | Writing SSH key tar header
	I0807 19:27:00.291901   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:00.291816   77334 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483 ...
	I0807 19:27:00.291915   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483
	I0807 19:27:00.291943   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483 (perms=drwx------)
	I0807 19:27:00.291958   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube/machines
	I0807 19:27:00.291970   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube/machines (perms=drwxr-xr-x)
	I0807 19:27:00.291985   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:27:00.292000   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864/.minikube (perms=drwxr-xr-x)
	I0807 19:27:00.292028   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins/minikube-integration/19389-20864 (perms=drwxrwxr-x)
	I0807 19:27:00.292042   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0807 19:27:00.292053   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19389-20864
	I0807 19:27:00.292065   77047 main.go:141] libmachine: (kindnet-853483) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0807 19:27:00.292076   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0807 19:27:00.292088   77047 main.go:141] libmachine: (kindnet-853483) Creating domain...
	I0807 19:27:00.292101   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home/jenkins
	I0807 19:27:00.292115   77047 main.go:141] libmachine: (kindnet-853483) DBG | Checking permissions on dir: /home
	I0807 19:27:00.292126   77047 main.go:141] libmachine: (kindnet-853483) DBG | Skipping /home - not owner
	I0807 19:27:00.293346   77047 main.go:141] libmachine: (kindnet-853483) define libvirt domain using xml: 
	I0807 19:27:00.293372   77047 main.go:141] libmachine: (kindnet-853483) <domain type='kvm'>
	I0807 19:27:00.293383   77047 main.go:141] libmachine: (kindnet-853483)   <name>kindnet-853483</name>
	I0807 19:27:00.293395   77047 main.go:141] libmachine: (kindnet-853483)   <memory unit='MiB'>3072</memory>
	I0807 19:27:00.293404   77047 main.go:141] libmachine: (kindnet-853483)   <vcpu>2</vcpu>
	I0807 19:27:00.293412   77047 main.go:141] libmachine: (kindnet-853483)   <features>
	I0807 19:27:00.293445   77047 main.go:141] libmachine: (kindnet-853483)     <acpi/>
	I0807 19:27:00.293465   77047 main.go:141] libmachine: (kindnet-853483)     <apic/>
	I0807 19:27:00.293471   77047 main.go:141] libmachine: (kindnet-853483)     <pae/>
	I0807 19:27:00.293475   77047 main.go:141] libmachine: (kindnet-853483)     
	I0807 19:27:00.293481   77047 main.go:141] libmachine: (kindnet-853483)   </features>
	I0807 19:27:00.293487   77047 main.go:141] libmachine: (kindnet-853483)   <cpu mode='host-passthrough'>
	I0807 19:27:00.293495   77047 main.go:141] libmachine: (kindnet-853483)   
	I0807 19:27:00.293507   77047 main.go:141] libmachine: (kindnet-853483)   </cpu>
	I0807 19:27:00.293515   77047 main.go:141] libmachine: (kindnet-853483)   <os>
	I0807 19:27:00.293520   77047 main.go:141] libmachine: (kindnet-853483)     <type>hvm</type>
	I0807 19:27:00.293527   77047 main.go:141] libmachine: (kindnet-853483)     <boot dev='cdrom'/>
	I0807 19:27:00.293532   77047 main.go:141] libmachine: (kindnet-853483)     <boot dev='hd'/>
	I0807 19:27:00.293537   77047 main.go:141] libmachine: (kindnet-853483)     <bootmenu enable='no'/>
	I0807 19:27:00.293542   77047 main.go:141] libmachine: (kindnet-853483)   </os>
	I0807 19:27:00.293547   77047 main.go:141] libmachine: (kindnet-853483)   <devices>
	I0807 19:27:00.293554   77047 main.go:141] libmachine: (kindnet-853483)     <disk type='file' device='cdrom'>
	I0807 19:27:00.293562   77047 main.go:141] libmachine: (kindnet-853483)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/boot2docker.iso'/>
	I0807 19:27:00.293569   77047 main.go:141] libmachine: (kindnet-853483)       <target dev='hdc' bus='scsi'/>
	I0807 19:27:00.293580   77047 main.go:141] libmachine: (kindnet-853483)       <readonly/>
	I0807 19:27:00.293589   77047 main.go:141] libmachine: (kindnet-853483)     </disk>
	I0807 19:27:00.293596   77047 main.go:141] libmachine: (kindnet-853483)     <disk type='file' device='disk'>
	I0807 19:27:00.293603   77047 main.go:141] libmachine: (kindnet-853483)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0807 19:27:00.293615   77047 main.go:141] libmachine: (kindnet-853483)       <source file='/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/kindnet-853483.rawdisk'/>
	I0807 19:27:00.293629   77047 main.go:141] libmachine: (kindnet-853483)       <target dev='hda' bus='virtio'/>
	I0807 19:27:00.293640   77047 main.go:141] libmachine: (kindnet-853483)     </disk>
	I0807 19:27:00.293649   77047 main.go:141] libmachine: (kindnet-853483)     <interface type='network'>
	I0807 19:27:00.293658   77047 main.go:141] libmachine: (kindnet-853483)       <source network='mk-kindnet-853483'/>
	I0807 19:27:00.293672   77047 main.go:141] libmachine: (kindnet-853483)       <model type='virtio'/>
	I0807 19:27:00.293682   77047 main.go:141] libmachine: (kindnet-853483)     </interface>
	I0807 19:27:00.293690   77047 main.go:141] libmachine: (kindnet-853483)     <interface type='network'>
	I0807 19:27:00.293701   77047 main.go:141] libmachine: (kindnet-853483)       <source network='default'/>
	I0807 19:27:00.293709   77047 main.go:141] libmachine: (kindnet-853483)       <model type='virtio'/>
	I0807 19:27:00.293720   77047 main.go:141] libmachine: (kindnet-853483)     </interface>
	I0807 19:27:00.293730   77047 main.go:141] libmachine: (kindnet-853483)     <serial type='pty'>
	I0807 19:27:00.293742   77047 main.go:141] libmachine: (kindnet-853483)       <target port='0'/>
	I0807 19:27:00.293756   77047 main.go:141] libmachine: (kindnet-853483)     </serial>
	I0807 19:27:00.293766   77047 main.go:141] libmachine: (kindnet-853483)     <console type='pty'>
	I0807 19:27:00.293774   77047 main.go:141] libmachine: (kindnet-853483)       <target type='serial' port='0'/>
	I0807 19:27:00.293785   77047 main.go:141] libmachine: (kindnet-853483)     </console>
	I0807 19:27:00.293792   77047 main.go:141] libmachine: (kindnet-853483)     <rng model='virtio'>
	I0807 19:27:00.293804   77047 main.go:141] libmachine: (kindnet-853483)       <backend model='random'>/dev/random</backend>
	I0807 19:27:00.293810   77047 main.go:141] libmachine: (kindnet-853483)     </rng>
	I0807 19:27:00.293815   77047 main.go:141] libmachine: (kindnet-853483)     
	I0807 19:27:00.293824   77047 main.go:141] libmachine: (kindnet-853483)     
	I0807 19:27:00.293852   77047 main.go:141] libmachine: (kindnet-853483)   </devices>
	I0807 19:27:00.293870   77047 main.go:141] libmachine: (kindnet-853483) </domain>
	I0807 19:27:00.293881   77047 main.go:141] libmachine: (kindnet-853483) 
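The domain definition logged line by line above is ordinary libvirt XML built from the machine config (name, 3072 MiB of memory, 2 vCPUs, boot2docker ISO, raw disk, and two virtio NICs). A compact sketch of rendering such a definition with text/template, trimmed to a few of the fields visible in the log (not the driver's real template):

package main

import (
	"os"
	"text/template"
)

// Domain holds only a subset of the fields visible in the logged XML.
type Domain struct {
	Name     string
	MemoryMB int
	VCPUs    int
	Network  string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <devices>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	d := Domain{Name: "kindnet-853483", MemoryMB: 3072, VCPUs: 2, Network: "mk-kindnet-853483"}
	if err := tmpl.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
}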
	I0807 19:27:00.298270   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:9e:fa:3e in network default
	I0807 19:27:00.298902   77047 main.go:141] libmachine: (kindnet-853483) Ensuring networks are active...
	I0807 19:27:00.298930   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:00.299624   77047 main.go:141] libmachine: (kindnet-853483) Ensuring network default is active
	I0807 19:27:00.299964   77047 main.go:141] libmachine: (kindnet-853483) Ensuring network mk-kindnet-853483 is active
	I0807 19:27:00.300580   77047 main.go:141] libmachine: (kindnet-853483) Getting domain xml...
	I0807 19:27:00.301652   77047 main.go:141] libmachine: (kindnet-853483) Creating domain...
	I0807 19:27:01.724060   77047 main.go:141] libmachine: (kindnet-853483) Waiting to get IP...
	I0807 19:27:01.725247   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:01.725926   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:01.725975   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:01.725902   77334 retry.go:31] will retry after 305.00446ms: waiting for machine to come up
	I0807 19:27:02.032692   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:02.033317   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:02.033357   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:02.033265   77334 retry.go:31] will retry after 251.693312ms: waiting for machine to come up
	I0807 19:27:02.286702   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:02.287502   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:02.287531   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:02.287463   77334 retry.go:31] will retry after 421.702061ms: waiting for machine to come up
	I0807 19:27:02.711197   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:02.711853   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:02.711892   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:02.711805   77334 retry.go:31] will retry after 537.575762ms: waiting for machine to come up
	I0807 19:26:58.504155   77301 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0807 19:26:58.504251   77301 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0807 19:26:58.504269   77301 cache.go:56] Caching tarball of preloaded images
	I0807 19:26:58.504377   77301 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:26:58.504391   77301 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0807 19:26:58.504509   77301 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/config.json ...
	I0807 19:26:58.504771   77301 start.go:360] acquireMachinesLock for kubernetes-upgrade-235652: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:27:01.269603   76375 main.go:141] libmachine: (auto-853483) Calling .GetIP
	I0807 19:27:01.272969   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:01.273432   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:27:01.273460   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:01.273783   76375 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0807 19:27:01.278847   76375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:27:01.293313   76375 kubeadm.go:883] updating cluster {Name:auto-853483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:auto-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:27:01.293446   76375 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:27:01.293550   76375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:01.328593   76375 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0807 19:27:01.328661   76375 ssh_runner.go:195] Run: which lz4
	I0807 19:27:01.333481   76375 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0807 19:27:01.340016   76375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 19:27:01.340059   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0807 19:27:02.988031   76375 crio.go:462] duration metric: took 1.654595736s to copy over tarball
	I0807 19:27:02.988119   76375 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 19:27:03.251531   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:03.252056   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:03.252086   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:03.252001   77334 retry.go:31] will retry after 559.292592ms: waiting for machine to come up
	I0807 19:27:03.812944   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:03.813485   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:03.813514   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:03.813432   77334 retry.go:31] will retry after 598.081156ms: waiting for machine to come up
	I0807 19:27:04.413334   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:04.413892   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:04.413921   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:04.413826   77334 retry.go:31] will retry after 774.321035ms: waiting for machine to come up
	I0807 19:27:05.190420   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:05.191007   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:05.191029   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:05.190950   77334 retry.go:31] will retry after 1.209344161s: waiting for machine to come up
	I0807 19:27:06.402288   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:06.402834   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:06.402880   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:06.402813   77334 retry.go:31] will retry after 1.51315646s: waiting for machine to come up
	I0807 19:27:05.529433   76375 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.541277685s)
	I0807 19:27:05.529484   76375 crio.go:469] duration metric: took 2.541421039s to extract the tarball
	I0807 19:27:05.529525   76375 ssh_runner.go:146] rm: /preloaded.tar.lz4
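The preload flow above is: check for /preloaded.tar.lz4 on the VM, scp the cached tarball over if it is missing, unpack it into /var with lz4 decompression while preserving the security.capability extended attributes, then delete the tarball. A minimal local sketch of the extraction step via os/exec (assumes tar and lz4 are installed; sudo omitted):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into destDir,
// keeping security.capability extended attributes, like the
// "tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf"
// command in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", destDir,
		"-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}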
	I0807 19:27:05.569375   76375 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:05.624180   76375 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:27:05.624219   76375 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:27:05.624230   76375 kubeadm.go:934] updating node { 192.168.72.13 8443 v1.30.3 crio true true} ...
	I0807 19:27:05.624372   76375 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-853483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:auto-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:27:05.624461   76375 ssh_runner.go:195] Run: crio config
	I0807 19:27:05.683538   76375 cni.go:84] Creating CNI manager for ""
	I0807 19:27:05.683572   76375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:27:05.683587   76375 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:27:05.683615   76375 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-853483 NodeName:auto-853483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:27:05.683781   76375 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-853483"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
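The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:181 and then written to /var/tmp/minikube/kubeadm.yaml.new. A much-reduced sketch of that options-to-config step using text/template (the struct fields and template text are illustrative, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// Options is a tiny subset of the kubeadm options shown in the log.
type Options struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := Options{
		AdvertiseAddress:  "192.168.72.13",
		APIServerPort:     8443,
		NodeName:          "auto-853483",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.3",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}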
	I0807 19:27:05.683851   76375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:27:05.694253   76375 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:27:05.694325   76375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:27:05.704060   76375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0807 19:27:05.720842   76375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:27:05.737541   76375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
	I0807 19:27:05.755068   76375 ssh_runner.go:195] Run: grep 192.168.72.13	control-plane.minikube.internal$ /etc/hosts
	I0807 19:27:05.759133   76375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:27:05.772095   76375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:05.894574   76375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:27:05.916224   76375 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483 for IP: 192.168.72.13
	I0807 19:27:05.916250   76375 certs.go:194] generating shared ca certs ...
	I0807 19:27:05.916265   76375 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:05.916408   76375 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:27:05.916447   76375 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:27:05.916456   76375 certs.go:256] generating profile certs ...
	I0807 19:27:05.916500   76375 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.key
	I0807 19:27:05.916521   76375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.crt with IP's: []
	I0807 19:27:06.038212   76375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.crt ...
	I0807 19:27:06.038241   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.crt: {Name:mk2c4172ea69e8951cb03a21ecf9174350860b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:06.038429   76375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.key ...
	I0807 19:27:06.038443   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/client.key: {Name:mk66d146dfd395882328bcd6ae8639dc8eeb5085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:06.038547   76375 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key.bb97227c
	I0807 19:27:06.038563   76375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt.bb97227c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
	I0807 19:27:06.144466   76375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt.bb97227c ...
	I0807 19:27:06.144502   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt.bb97227c: {Name:mke9d4eaf4e0bf6e04b1e5ce4cdbbb8026826399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:06.144702   76375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key.bb97227c ...
	I0807 19:27:06.144721   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key.bb97227c: {Name:mk451eb74e0bf4c121d6048aa8491fc538cb5fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:06.144829   76375 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt.bb97227c -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt
	I0807 19:27:06.144929   76375 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key.bb97227c -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key
	I0807 19:27:06.144989   76375 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.key
	I0807 19:27:06.145003   76375 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.crt with IP's: []
	I0807 19:27:06.394453   76375 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.crt ...
	I0807 19:27:06.394482   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.crt: {Name:mk992d348dffaa9c5e58330fc9c0ad8c1e9f3132 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:06.394639   76375 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.key ...
	I0807 19:27:06.394650   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.key: {Name:mkbb2c7f8f314a251e9b3305b23bc1bc1ceab6e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
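The profile certificates above are issued with IP SANs covering the in-cluster service VIP (10.96.0.1), loopback, and the node IP. A compact sketch of issuing such a certificate with Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs the apiserver cert with the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key pair for the apiserver certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template with the IP SANs shown in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.72.13"),
		},
	}
	// Self-signed for the sketch: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}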
	I0807 19:27:06.394811   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:27:06.394848   76375 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:27:06.394855   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:27:06.394876   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:27:06.394897   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:27:06.394922   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:27:06.394958   76375 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:27:06.395569   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:27:06.427821   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:27:06.457842   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:27:06.485453   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:27:06.512276   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0807 19:27:06.538668   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0807 19:27:06.566998   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:27:06.595016   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:27:06.628561   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:27:06.654463   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:27:06.696288   76375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:27:06.727213   76375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:27:06.747576   76375 ssh_runner.go:195] Run: openssl version
	I0807 19:27:06.754028   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:27:06.766147   76375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:27:06.771577   76375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:27:06.771641   76375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:27:06.780195   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:27:06.795808   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:27:06.810553   76375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:06.815447   76375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:06.815514   76375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:06.821443   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:27:06.832174   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:27:06.843089   76375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:27:06.848318   76375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:27:06.848380   76375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:27:06.854590   76375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
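Each CA file is copied to /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the <hash>.0 names such as 3ec20f2e.0 above), which is how OpenSSL locates trusted certificates. A sketch of that step which shells out to openssl x509 -hash just as the log does (paths taken from the log; needs write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of certPath and creates the
// certsDir/<hash>.0 symlink pointing at it, if it does not exist yet.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}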
	I0807 19:27:06.866843   76375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:27:06.871456   76375 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:27:06.871521   76375 kubeadm.go:392] StartCluster: {Name:auto-853483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clu
sterName:auto-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:27:06.871593   76375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:27:06.871651   76375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:27:06.912524   76375 cri.go:89] found id: ""
	I0807 19:27:06.912595   76375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 19:27:06.922876   76375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 19:27:06.933264   76375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 19:27:06.943550   76375 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:27:06.943569   76375 kubeadm.go:157] found existing configuration files:
	
	I0807 19:27:06.943619   76375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 19:27:06.953602   76375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:27:06.953670   76375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 19:27:06.963685   76375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 19:27:06.974355   76375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:27:06.974426   76375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 19:27:06.984056   76375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 19:27:06.994101   76375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:27:06.994171   76375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 19:27:07.004919   76375 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 19:27:07.015571   76375 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:27:07.015644   76375 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 19:27:07.025470   76375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 19:27:07.231247   76375 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:27:07.917407   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:07.917966   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:07.917992   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:07.917914   77334 retry.go:31] will retry after 2.195575336s: waiting for machine to come up
	I0807 19:27:10.114835   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:10.115287   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:10.115311   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:10.115240   77334 retry.go:31] will retry after 1.809277461s: waiting for machine to come up
	I0807 19:27:11.926500   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:11.926958   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:11.926986   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:11.926897   77334 retry.go:31] will retry after 2.580843328s: waiting for machine to come up
	I0807 19:27:17.621036   76375 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 19:27:17.621105   76375 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 19:27:17.621200   76375 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:27:17.621341   76375 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:27:17.621431   76375 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:27:17.621485   76375 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:27:17.623193   76375 out.go:204]   - Generating certificates and keys ...
	I0807 19:27:17.623284   76375 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 19:27:17.623373   76375 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 19:27:17.623563   76375 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 19:27:17.623637   76375 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 19:27:17.623719   76375 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 19:27:17.623786   76375 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 19:27:17.623866   76375 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 19:27:17.624040   76375 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-853483 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I0807 19:27:17.624119   76375 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 19:27:17.624288   76375 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-853483 localhost] and IPs [192.168.72.13 127.0.0.1 ::1]
	I0807 19:27:17.624389   76375 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 19:27:17.624488   76375 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 19:27:17.624532   76375 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 19:27:17.624582   76375 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:27:17.624628   76375 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:27:17.624684   76375 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 19:27:17.624734   76375 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:27:17.624823   76375 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:27:17.624899   76375 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:27:17.625019   76375 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:27:17.625118   76375 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:27:17.626644   76375 out.go:204]   - Booting up control plane ...
	I0807 19:27:17.626742   76375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:27:17.626836   76375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:27:17.626926   76375 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:27:17.627048   76375 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:27:17.627185   76375 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:27:17.627263   76375 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 19:27:17.627447   76375 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 19:27:17.627541   76375 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 19:27:17.627622   76375 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.096195ms
	I0807 19:27:17.627717   76375 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 19:27:17.627795   76375 kubeadm.go:310] [api-check] The API server is healthy after 5.002988699s
	I0807 19:27:17.627938   76375 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 19:27:17.628093   76375 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 19:27:17.628174   76375 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 19:27:17.628412   76375 kubeadm.go:310] [mark-control-plane] Marking the node auto-853483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 19:27:17.628473   76375 kubeadm.go:310] [bootstrap-token] Using token: qndbpl.u8bc6ldhwhybmq48
	I0807 19:27:17.629928   76375 out.go:204]   - Configuring RBAC rules ...
	I0807 19:27:17.630035   76375 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 19:27:17.630124   76375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 19:27:17.630259   76375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 19:27:17.630370   76375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 19:27:17.630504   76375 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 19:27:17.630607   76375 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 19:27:17.630775   76375 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 19:27:17.630858   76375 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 19:27:17.630901   76375 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 19:27:17.630913   76375 kubeadm.go:310] 
	I0807 19:27:17.630962   76375 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 19:27:17.630968   76375 kubeadm.go:310] 
	I0807 19:27:17.631035   76375 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 19:27:17.631044   76375 kubeadm.go:310] 
	I0807 19:27:17.631113   76375 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 19:27:17.631202   76375 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 19:27:17.631281   76375 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 19:27:17.631291   76375 kubeadm.go:310] 
	I0807 19:27:17.631370   76375 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 19:27:17.631383   76375 kubeadm.go:310] 
	I0807 19:27:17.631451   76375 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 19:27:17.631463   76375 kubeadm.go:310] 
	I0807 19:27:17.631546   76375 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 19:27:17.631615   76375 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 19:27:17.631700   76375 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 19:27:17.631714   76375 kubeadm.go:310] 
	I0807 19:27:17.631818   76375 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 19:27:17.631888   76375 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 19:27:17.631895   76375 kubeadm.go:310] 
	I0807 19:27:17.631956   76375 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qndbpl.u8bc6ldhwhybmq48 \
	I0807 19:27:17.632085   76375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 \
	I0807 19:27:17.632110   76375 kubeadm.go:310] 	--control-plane 
	I0807 19:27:17.632116   76375 kubeadm.go:310] 
	I0807 19:27:17.632192   76375 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 19:27:17.632224   76375 kubeadm.go:310] 
	I0807 19:27:17.632334   76375 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qndbpl.u8bc6ldhwhybmq48 \
	I0807 19:27:17.632516   76375 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 
	I0807 19:27:17.632537   76375 cni.go:84] Creating CNI manager for ""
	I0807 19:27:17.632545   76375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:27:17.634479   76375 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0807 19:27:14.509369   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:14.509921   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:14.509943   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:14.509863   77334 retry.go:31] will retry after 4.41557475s: waiting for machine to come up
	I0807 19:27:17.635893   76375 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0807 19:27:17.648113   76375 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0807 19:27:17.668596   76375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 19:27:17.668710   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:17.668741   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-853483 minikube.k8s.io/updated_at=2024_08_07T19_27_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=auto-853483 minikube.k8s.io/primary=true
	I0807 19:27:17.829911   76375 ops.go:34] apiserver oom_adj: -16
	I0807 19:27:17.834810   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:18.335430   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:18.930085   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:18.930638   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find current IP address of domain kindnet-853483 in network mk-kindnet-853483
	I0807 19:27:18.930660   77047 main.go:141] libmachine: (kindnet-853483) DBG | I0807 19:27:18.930601   77334 retry.go:31] will retry after 3.546020089s: waiting for machine to come up
	I0807 19:27:22.478666   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.479337   77047 main.go:141] libmachine: (kindnet-853483) Found IP for machine: 192.168.61.166
	I0807 19:27:22.479364   77047 main.go:141] libmachine: (kindnet-853483) Reserving static IP address...
	I0807 19:27:22.479378   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has current primary IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.479736   77047 main.go:141] libmachine: (kindnet-853483) DBG | unable to find host DHCP lease matching {name: "kindnet-853483", mac: "52:54:00:40:0b:a9", ip: "192.168.61.166"} in network mk-kindnet-853483
	I0807 19:27:22.557202   77047 main.go:141] libmachine: (kindnet-853483) Reserved static IP address: 192.168.61.166
	I0807 19:27:22.557231   77047 main.go:141] libmachine: (kindnet-853483) DBG | Getting to WaitForSSH function...
	I0807 19:27:22.557239   77047 main.go:141] libmachine: (kindnet-853483) Waiting for SSH to be available...
	I0807 19:27:22.560183   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.560676   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:22.560706   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.560850   77047 main.go:141] libmachine: (kindnet-853483) DBG | Using SSH client type: external
	I0807 19:27:22.560869   77047 main.go:141] libmachine: (kindnet-853483) DBG | Using SSH private key: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa (-rw-------)
	I0807 19:27:22.560898   77047 main.go:141] libmachine: (kindnet-853483) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0807 19:27:22.560907   77047 main.go:141] libmachine: (kindnet-853483) DBG | About to run SSH command:
	I0807 19:27:22.560915   77047 main.go:141] libmachine: (kindnet-853483) DBG | exit 0
	I0807 19:27:22.688394   77047 main.go:141] libmachine: (kindnet-853483) DBG | SSH cmd err, output: <nil>: 
	I0807 19:27:22.688800   77047 main.go:141] libmachine: (kindnet-853483) KVM machine creation complete!
	I0807 19:27:22.689244   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetConfigRaw
	I0807 19:27:22.689777   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:22.689961   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:22.690138   77047 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0807 19:27:22.690156   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetState
	I0807 19:27:22.691701   77047 main.go:141] libmachine: Detecting operating system of created instance...
	I0807 19:27:22.691718   77047 main.go:141] libmachine: Waiting for SSH to be available...
	I0807 19:27:22.691725   77047 main.go:141] libmachine: Getting to WaitForSSH function...
	I0807 19:27:22.691734   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:22.694803   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.695213   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:22.695254   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.695437   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:22.695632   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.695796   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.695910   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:22.696026   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:22.696214   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:22.696227   77047 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0807 19:27:22.803610   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:27:22.803647   77047 main.go:141] libmachine: Detecting the provisioner...
	I0807 19:27:22.803659   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:22.806586   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.806915   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:22.806950   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.807133   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:22.807332   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.807486   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.807636   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:22.807791   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:22.808004   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:22.808020   77047 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0807 19:27:18.835103   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:19.335888   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:19.835786   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:20.335197   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:20.835232   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:21.335276   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:21.834869   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:22.335554   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:22.835255   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:23.335366   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:24.041232   77301 start.go:364] duration metric: took 25.536425118s to acquireMachinesLock for "kubernetes-upgrade-235652"
	I0807 19:27:24.041279   77301 start.go:96] Skipping create...Using existing machine configuration
	I0807 19:27:24.041296   77301 fix.go:54] fixHost starting: 
	I0807 19:27:24.041715   77301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:24.041770   77301 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:24.061845   77301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0807 19:27:24.062242   77301 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:24.062774   77301 main.go:141] libmachine: Using API Version  1
	I0807 19:27:24.062814   77301 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:24.063178   77301 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:24.063356   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:24.063519   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetState
	I0807 19:27:24.065103   77301 fix.go:112] recreateIfNeeded on kubernetes-upgrade-235652: state=Running err=<nil>
	W0807 19:27:24.065124   77301 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 19:27:24.066971   77301 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-235652" VM ...
	I0807 19:27:22.917735   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0807 19:27:22.917821   77047 main.go:141] libmachine: found compatible host: buildroot
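The provisioner is detected by running cat /etc/os-release over SSH and matching the ID field, which is "buildroot" for the minikube ISO. A minimal sketch of that detection step (parsing the output shown above; minikube's real provisioner registry covers more distributions):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID= field from /etc/os-release output and
// reports whether it matches a known provisioner (only buildroot here).
func detectProvisioner(osRelease string) (string, bool) {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			return id, id == "buildroot"
		}
	}
	return "", false
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	id, ok := detectProvisioner(out)
	fmt.Printf("provisioner %q compatible: %v\n", id, ok)
}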
	I0807 19:27:22.917864   77047 main.go:141] libmachine: Provisioning with buildroot...
	I0807 19:27:22.917878   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetMachineName
	I0807 19:27:22.918142   77047 buildroot.go:166] provisioning hostname "kindnet-853483"
	I0807 19:27:22.918171   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetMachineName
	I0807 19:27:22.918383   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:22.920946   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.921241   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:22.921271   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:22.921521   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:22.921737   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.921993   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:22.922173   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:22.922388   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:22.922543   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:22.922557   77047 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-853483 && echo "kindnet-853483" | sudo tee /etc/hostname
	I0807 19:27:23.043274   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-853483
	
	I0807 19:27:23.043297   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.046496   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.047030   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.047060   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.047254   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:23.047426   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.047612   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.047769   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:23.047989   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:23.048175   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:23.048218   77047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-853483' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-853483/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-853483' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:27:23.172365   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:27:23.172392   77047 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:27:23.172429   77047 buildroot.go:174] setting up certificates
	I0807 19:27:23.172444   77047 provision.go:84] configureAuth start
	I0807 19:27:23.172462   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetMachineName
	I0807 19:27:23.172800   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetIP
	I0807 19:27:23.175824   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.176261   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.176312   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.176456   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.178723   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.179104   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.179127   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.179288   77047 provision.go:143] copyHostCerts
	I0807 19:27:23.179350   77047 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:27:23.179363   77047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:27:23.179426   77047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:27:23.179561   77047 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:27:23.179575   77047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:27:23.179606   77047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:27:23.179684   77047 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:27:23.179695   77047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:27:23.179721   77047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:27:23.179788   77047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.kindnet-853483 san=[127.0.0.1 192.168.61.166 kindnet-853483 localhost minikube]
	I0807 19:27:23.331104   77047 provision.go:177] copyRemoteCerts
	I0807 19:27:23.331182   77047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:27:23.331206   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.334215   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.334672   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.334700   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.334911   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:23.335107   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.335329   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:23.335479   77047 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa Username:docker}
	I0807 19:27:23.423258   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:27:23.450745   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0807 19:27:23.476801   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0807 19:27:23.501831   77047 provision.go:87] duration metric: took 329.372796ms to configureAuth
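The configureAuth step above regenerates a server certificate for the guest, signed by the profile's CA and carrying SANs for 127.0.0.1, 192.168.61.166, kindnet-853483, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the VM. A rough openssl equivalent of that issuance, purely for illustration (minikube does this in Go; the file names below are placeholders):

    # Illustrative sketch, not minikube's implementation; ca.pem/ca-key.pem stand in
    # for the profile's CA material, output names are placeholders.
    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr -subj "/O=jenkins.kindnet-853483"
    printf 'subjectAltName = IP:127.0.0.1, IP:192.168.61.166, DNS:kindnet-853483, DNS:localhost, DNS:minikube\n' > san.cnf
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -extfile san.cnf -out server.pem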
	I0807 19:27:23.501855   77047 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:27:23.502038   77047 config.go:182] Loaded profile config "kindnet-853483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:27:23.502106   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.504833   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.505228   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.505257   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.505540   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:23.505749   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.505918   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.506038   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:23.506191   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:23.506351   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:23.506364   77047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:27:23.787207   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:27:23.787237   77047 main.go:141] libmachine: Checking connection to Docker...
	I0807 19:27:23.787248   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetURL
	I0807 19:27:23.788784   77047 main.go:141] libmachine: (kindnet-853483) DBG | Using libvirt version 6000000
	I0807 19:27:23.791142   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.791476   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.791502   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.791695   77047 main.go:141] libmachine: Docker is up and running!
	I0807 19:27:23.791712   77047 main.go:141] libmachine: Reticulating splines...
	I0807 19:27:23.791720   77047 client.go:171] duration metric: took 24.129273573s to LocalClient.Create
	I0807 19:27:23.791745   77047 start.go:167] duration metric: took 24.129348033s to libmachine.API.Create "kindnet-853483"
	I0807 19:27:23.791755   77047 start.go:293] postStartSetup for "kindnet-853483" (driver="kvm2")
	I0807 19:27:23.791763   77047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:27:23.791778   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:23.791994   77047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:27:23.792031   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.794290   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.794593   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.794623   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.794784   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:23.794977   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.795168   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:23.795319   77047 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa Username:docker}
	I0807 19:27:23.879929   77047 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:27:23.884481   77047 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:27:23.884508   77047 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:27:23.884585   77047 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:27:23.884685   77047 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:27:23.884798   77047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:27:23.895396   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:27:23.923566   77047 start.go:296] duration metric: took 131.790647ms for postStartSetup
	I0807 19:27:23.923610   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetConfigRaw
	I0807 19:27:23.924511   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetIP
	I0807 19:27:23.928478   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.928821   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.928850   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.929153   77047 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/config.json ...
	I0807 19:27:23.929378   77047 start.go:128] duration metric: took 24.292150079s to createHost
	I0807 19:27:23.929424   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:23.931815   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.932072   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:23.932106   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:23.932299   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:23.932492   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.932637   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:23.932773   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:23.932926   77047 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:23.933094   77047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.166 22 <nil> <nil>}
	I0807 19:27:23.933107   77047 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0807 19:27:24.041017   77047 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723058844.012803053
	
	I0807 19:27:24.041044   77047 fix.go:216] guest clock: 1723058844.012803053
	I0807 19:27:24.041066   77047 fix.go:229] Guest: 2024-08-07 19:27:24.012803053 +0000 UTC Remote: 2024-08-07 19:27:23.929393817 +0000 UTC m=+46.081072901 (delta=83.409236ms)
	I0807 19:27:24.041132   77047 fix.go:200] guest clock delta is within tolerance: 83.409236ms
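The %!s(MISSING) in the logged command a few lines above is an artifact of Go's format-verb escaping; judging by the output (1723058844.012803053), the command actually run is `date +%s.%N`. fix.go compares that guest timestamp against the host clock and only resynchronizes when the delta exceeds tolerance. A manual equivalent, reusing the SSH key path and user shown in the sshutil lines of this log (illustrative only):

    # Illustrative only; key path and user are taken from the sshutil entries above.
    guest=$(ssh -i /home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa \
        docker@192.168.61.166 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"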
	I0807 19:27:24.041140   77047 start.go:83] releasing machines lock for "kindnet-853483", held for 24.404058804s
	I0807 19:27:24.041173   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:24.041607   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetIP
	I0807 19:27:24.044927   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.045344   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:24.045373   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.045590   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:24.046171   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:24.046357   77047 main.go:141] libmachine: (kindnet-853483) Calling .DriverName
	I0807 19:27:24.046430   77047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:27:24.046477   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:24.046566   77047 ssh_runner.go:195] Run: cat /version.json
	I0807 19:27:24.046594   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHHostname
	I0807 19:27:24.049572   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.049769   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.050008   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:24.050030   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.050112   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:24.050312   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:24.050469   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:24.050556   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:24.050618   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:24.050682   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHPort
	I0807 19:27:24.050746   77047 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa Username:docker}
	I0807 19:27:24.050864   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHKeyPath
	I0807 19:27:24.051041   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetSSHUsername
	I0807 19:27:24.051206   77047 sshutil.go:53] new ssh client: &{IP:192.168.61.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kindnet-853483/id_rsa Username:docker}
	I0807 19:27:24.158151   77047 ssh_runner.go:195] Run: systemctl --version
	I0807 19:27:24.164445   77047 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:27:24.325511   77047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 19:27:24.332491   77047 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:27:24.332569   77047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:27:24.352951   77047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0807 19:27:24.352982   77047 start.go:495] detecting cgroup driver to use...
	I0807 19:27:24.353059   77047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:27:24.371010   77047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:27:24.387843   77047 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:27:24.387918   77047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:27:24.403578   77047 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:27:24.418525   77047 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:27:24.542208   77047 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:27:24.712256   77047 docker.go:233] disabling docker service ...
	I0807 19:27:24.712322   77047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:27:24.730734   77047 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:27:24.748054   77047 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:27:24.895570   77047 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:27:25.021490   77047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:27:25.037496   77047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:27:25.058348   77047 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 19:27:25.058392   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.073015   77047 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:27:25.073077   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.085562   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.097147   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.109750   77047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:27:25.122758   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.134784   77047 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.153592   77047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:25.166104   77047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:27:25.177672   77047 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0807 19:27:25.177755   77047 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0807 19:27:25.194186   77047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:27:25.211001   77047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:25.348812   77047 ssh_runner.go:195] Run: sudo systemctl restart crio
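Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and allow unprivileged low ports via default_sysctls; br_netfilter is then loaded so bridged pod traffic traverses iptables, IPv4 forwarding is enabled, and CRI-O is restarted. A stand-alone sketch that would produce an equivalent drop-in (illustrative; the real 02-crio.conf is edited in place and its section layout may differ):

    # Illustrative equivalent of the in-place edits above, not the literal file.
    printf '%s\n' \
        '[crio.image]' \
        'pause_image = "registry.k8s.io/pause:3.9"' \
        '[crio.runtime]' \
        'cgroup_manager = "cgroupfs"' \
        'conmon_cgroup = "pod"' \
        'default_sysctls = [' \
        '  "net.ipv4.ip_unprivileged_port_start=0",' \
        ']' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
    sudo modprobe br_netfilter                        # bridged pod traffic must traverse iptables
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # required for pod-to-pod routing
    sudo systemctl daemon-reload && sudo systemctl restart crio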
	I0807 19:27:25.503410   77047 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:27:25.503529   77047 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:27:25.508888   77047 start.go:563] Will wait 60s for crictl version
	I0807 19:27:25.508955   77047 ssh_runner.go:195] Run: which crictl
	I0807 19:27:25.512933   77047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:27:25.553957   77047 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:27:25.554058   77047 ssh_runner.go:195] Run: crio --version
	I0807 19:27:25.582974   77047 ssh_runner.go:195] Run: crio --version
	I0807 19:27:25.619057   77047 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 19:27:25.620328   77047 main.go:141] libmachine: (kindnet-853483) Calling .GetIP
	I0807 19:27:25.623280   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:25.623631   77047 main.go:141] libmachine: (kindnet-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:0b:a9", ip: ""} in network mk-kindnet-853483: {Iface:virbr1 ExpiryTime:2024-08-07 20:27:15 +0000 UTC Type:0 Mac:52:54:00:40:0b:a9 Iaid: IPaddr:192.168.61.166 Prefix:24 Hostname:kindnet-853483 Clientid:01:52:54:00:40:0b:a9}
	I0807 19:27:25.623662   77047 main.go:141] libmachine: (kindnet-853483) DBG | domain kindnet-853483 has defined IP address 192.168.61.166 and MAC address 52:54:00:40:0b:a9 in network mk-kindnet-853483
	I0807 19:27:25.623817   77047 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0807 19:27:25.628097   77047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:27:25.642668   77047 kubeadm.go:883] updating cluster {Name:kindnet-853483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:27:25.642762   77047 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:27:25.642818   77047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:25.683667   77047 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0807 19:27:25.683743   77047 ssh_runner.go:195] Run: which lz4
	I0807 19:27:25.687983   77047 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0807 19:27:25.692500   77047 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0807 19:27:25.692531   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0807 19:27:27.178457   77047 crio.go:462] duration metric: took 1.490510109s to copy over tarball
	I0807 19:27:27.178546   77047 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0807 19:27:24.068548   77301 machine.go:94] provisionDockerMachine start ...
	I0807 19:27:24.068570   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:24.068776   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:24.071977   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.072537   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.072578   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.072815   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:24.073024   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.073231   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.073418   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:24.073677   77301 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:24.073909   77301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:27:24.073935   77301 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:27:24.193199   77301 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-235652
	
	I0807 19:27:24.193233   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:27:24.193502   77301 buildroot.go:166] provisioning hostname "kubernetes-upgrade-235652"
	I0807 19:27:24.193528   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:27:24.193727   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:24.197107   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.197619   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.197649   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.197860   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:24.198069   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.198229   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.198331   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:24.198463   77301 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:24.198685   77301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:27:24.198704   77301 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-235652 && echo "kubernetes-upgrade-235652" | sudo tee /etc/hostname
	I0807 19:27:24.338585   77301 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-235652
	
	I0807 19:27:24.338620   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:24.342319   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.342683   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.342716   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.342868   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:24.343087   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.343258   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.343444   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:24.343633   77301 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:24.343844   77301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:27:24.343863   77301 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-235652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-235652/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-235652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:27:24.466933   77301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:27:24.466966   77301 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:27:24.467039   77301 buildroot.go:174] setting up certificates
	I0807 19:27:24.467055   77301 provision.go:84] configureAuth start
	I0807 19:27:24.467069   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetMachineName
	I0807 19:27:24.467366   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:27:24.470480   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.470884   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.470929   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.471030   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:24.473774   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.474205   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.474237   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.474539   77301 provision.go:143] copyHostCerts
	I0807 19:27:24.474618   77301 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:27:24.474630   77301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:27:24.474681   77301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:27:24.474832   77301 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:27:24.474844   77301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:27:24.474870   77301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:27:24.474936   77301 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:27:24.474944   77301 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:27:24.474962   77301 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:27:24.475010   77301 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-235652 san=[127.0.0.1 192.168.50.208 kubernetes-upgrade-235652 localhost minikube]
	I0807 19:27:24.874296   77301 provision.go:177] copyRemoteCerts
	I0807 19:27:24.874367   77301 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:27:24.874409   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:24.877654   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.878224   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:24.878268   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:24.878310   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:24.878508   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:24.878664   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:24.878796   77301 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:27:24.973942   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:27:25.001591   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0807 19:27:25.027935   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 19:27:25.056109   77301 provision.go:87] duration metric: took 589.037149ms to configureAuth
	I0807 19:27:25.056142   77301 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:27:25.056373   77301 config.go:182] Loaded profile config "kubernetes-upgrade-235652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0807 19:27:25.056482   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:25.059689   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:25.060075   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:25.060105   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:25.060288   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:25.060486   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:25.060626   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:25.060751   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:25.060968   77301 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:25.061173   77301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:27:25.061199   77301 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:27:23.835572   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:24.335268   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:24.835810   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:25.335863   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:25.835855   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:26.335572   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:26.835646   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:27.335624   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:27.835713   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:28.334973   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:28.835848   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:29.335507   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:29.835480   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:30.335036   76375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:30.647011   76375 kubeadm.go:1113] duration metric: took 12.978366789s to wait for elevateKubeSystemPrivileges
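The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: the cluster is considered ready for addon setup once the default ServiceAccount exists. A minimal shell equivalent of that retry loop (illustrative; the 0.5s interval is an assumption):

    # Illustrative equivalent of the retry loop in the log above.
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done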
	I0807 19:27:30.647051   76375 kubeadm.go:394] duration metric: took 23.775535177s to StartCluster
	I0807 19:27:30.647075   76375 settings.go:142] acquiring lock: {Name:mke44792daf8192c7cb4430e19df00c0686edd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.647167   76375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:27:30.648899   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/kubeconfig: {Name:mk9a4ad53bf4447453626a7769211592f39f92fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.741307   76375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0807 19:27:30.741323   76375 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0807 19:27:30.741487   76375 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0807 19:27:30.741563   76375 addons.go:69] Setting storage-provisioner=true in profile "auto-853483"
	I0807 19:27:30.741591   76375 addons.go:69] Setting default-storageclass=true in profile "auto-853483"
	I0807 19:27:30.741622   76375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-853483"
	I0807 19:27:30.741665   76375 config.go:182] Loaded profile config "auto-853483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:27:30.741594   76375 addons.go:234] Setting addon storage-provisioner=true in "auto-853483"
	I0807 19:27:30.741749   76375 host.go:66] Checking if "auto-853483" exists ...
	I0807 19:27:30.742145   76375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:30.742151   76375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:30.742188   76375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:30.742207   76375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:30.759162   76375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0807 19:27:30.759698   76375 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:30.760320   76375 main.go:141] libmachine: Using API Version  1
	I0807 19:27:30.760342   76375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:30.760747   76375 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:30.760948   76375 main.go:141] libmachine: (auto-853483) Calling .GetState
	I0807 19:27:30.761673   76375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0807 19:27:30.762076   76375 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:30.762700   76375 main.go:141] libmachine: Using API Version  1
	I0807 19:27:30.762725   76375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:30.763085   76375 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:30.763535   76375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:30.763566   76375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:30.779819   76375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0807 19:27:30.780276   76375 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:30.780746   76375 main.go:141] libmachine: Using API Version  1
	I0807 19:27:30.780771   76375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:30.781082   76375 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:30.781415   76375 main.go:141] libmachine: (auto-853483) Calling .GetState
	I0807 19:27:30.783122   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:27:30.868399   76375 addons.go:234] Setting addon default-storageclass=true in "auto-853483"
	I0807 19:27:30.868446   76375 host.go:66] Checking if "auto-853483" exists ...
	I0807 19:27:30.868816   76375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:30.868891   76375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:30.870405   76375 out.go:177] * Verifying Kubernetes components...
	I0807 19:27:30.885789   76375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39045
	I0807 19:27:30.886315   76375 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:30.886809   76375 main.go:141] libmachine: Using API Version  1
	I0807 19:27:30.886834   76375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:30.887241   76375 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:30.887702   76375 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:27:30.887729   76375 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:27:30.904389   76375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0807 19:27:30.904831   76375 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:27:30.905300   76375 main.go:141] libmachine: Using API Version  1
	I0807 19:27:30.905327   76375 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:27:30.905661   76375 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:27:30.905841   76375 main.go:141] libmachine: (auto-853483) Calling .GetState
	I0807 19:27:30.907395   76375 main.go:141] libmachine: (auto-853483) Calling .DriverName
	I0807 19:27:30.907606   76375 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0807 19:27:30.907620   76375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0807 19:27:30.907639   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:27:30.910043   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:30.910420   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:27:30.910445   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:30.910569   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:27:30.910729   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:27:30.910884   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:27:30.911041   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:27:30.930395   76375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:30.930478   76375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0807 19:27:29.545855   77047 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.367276758s)
	I0807 19:27:29.545893   77047 crio.go:469] duration metric: took 2.367400183s to extract the tarball
	I0807 19:27:29.545901   77047 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0807 19:27:29.591592   77047 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:29.634752   77047 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:27:29.634781   77047 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:27:29.634792   77047 kubeadm.go:934] updating node { 192.168.61.166 8443 v1.30.3 crio true true} ...
	I0807 19:27:29.634936   77047 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-853483 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:kindnet-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
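The kubelet [Unit]/[Service] drop-in above is rendered from the node's cluster config. A rough sketch of that rendering with text/template follows; the struct fields (KubeletPath, Hostname, NodeIP) are illustrative stand-ins, not minikube's real types:

    // Hypothetical sketch of rendering a kubelet systemd drop-in like the one above.
    package main

    import (
        "os"
        "text/template"
    )

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, struct {
            KubeletPath, Hostname, NodeIP string
        }{
            KubeletPath: "/var/lib/minikube/binaries/v1.30.3/kubelet",
            Hostname:    "kindnet-853483",
            NodeIP:      "192.168.61.166",
        })
    }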
	I0807 19:27:29.635004   77047 ssh_runner.go:195] Run: crio config
	I0807 19:27:29.694383   77047 cni.go:84] Creating CNI manager for "kindnet"
	I0807 19:27:29.694404   77047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:27:29.694426   77047 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.166 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-853483 NodeName:kindnet-853483 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:27:29.694568   77047 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-853483"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:27:29.694637   77047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:27:29.705429   77047 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:27:29.705515   77047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:27:29.715988   77047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0807 19:27:29.734667   77047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:27:29.753338   77047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0807 19:27:29.772128   77047 ssh_runner.go:195] Run: grep 192.168.61.166	control-plane.minikube.internal$ /etc/hosts
	I0807 19:27:29.776478   77047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0807 19:27:29.790779   77047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:29.933738   77047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:27:29.953068   77047 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483 for IP: 192.168.61.166
	I0807 19:27:29.953087   77047 certs.go:194] generating shared ca certs ...
	I0807 19:27:29.953101   77047 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:29.953253   77047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:27:29.953309   77047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:27:29.953323   77047 certs.go:256] generating profile certs ...
	I0807 19:27:29.953380   77047 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.key
	I0807 19:27:29.953397   77047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.crt with IP's: []
	I0807 19:27:30.008827   77047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.crt ...
	I0807 19:27:30.008855   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.crt: {Name:mkc1f559dc917d5b9b50012264f23c9d61da7b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.009035   77047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.key ...
	I0807 19:27:30.009049   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/client.key: {Name:mke78c21f70b2851fe155c0a3a57ec104d52cd0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.009159   77047 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key.20ff8fe3
	I0807 19:27:30.009178   77047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt.20ff8fe3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.166]
	I0807 19:27:30.106881   77047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt.20ff8fe3 ...
	I0807 19:27:30.106914   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt.20ff8fe3: {Name:mk9e65b13fc3521695e428eb3d6ba77bd8af6bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.107087   77047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key.20ff8fe3 ...
	I0807 19:27:30.107103   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key.20ff8fe3: {Name:mkb48de341c974683b97a7687d91b0f9ce9f7dea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.107177   77047 certs.go:381] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt.20ff8fe3 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt
	I0807 19:27:30.107249   77047 certs.go:385] copying /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key.20ff8fe3 -> /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key
	I0807 19:27:30.107300   77047 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.key
	I0807 19:27:30.107310   77047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.crt with IP's: []
	I0807 19:27:30.187361   77047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.crt ...
	I0807 19:27:30.187395   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.crt: {Name:mk4cbb5de6fc3b8b29f3eadbfaf970d45b3a1827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:30.187557   77047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.key ...
	I0807 19:27:30.187568   77047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.key: {Name:mk7e0beb7f2146a0d32f9c9e63936f94138d71ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
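The certs.go/crypto.go steps above generate CA-signed profile certificates with the listed IP SANs. Here is a self-contained sketch of the same idea with crypto/x509 (error handling omitted; the CA is generated on the fly here, whereas minikube reuses its cached minikubeCA key pair):

    // Illustrative: create a CA, then a leaf certificate carrying the IP SANs
    // seen in the log, signed by that CA.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.166"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }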
	I0807 19:27:30.187746   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:27:30.187783   77047 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:27:30.187790   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:27:30.187809   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:27:30.187837   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:27:30.187858   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:27:30.187897   77047 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:27:30.188519   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:27:30.214156   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:27:30.242003   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:27:30.273383   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:27:30.301604   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 19:27:30.334011   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:27:30.363144   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:27:30.398433   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kindnet-853483/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:27:30.428861   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:27:30.455534   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:27:30.480425   77047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:27:30.504890   77047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:27:30.521544   77047 ssh_runner.go:195] Run: openssl version
	I0807 19:27:30.527833   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:27:30.540098   77047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:27:30.545101   77047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:27:30.545174   77047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:27:30.551755   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:27:30.564092   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:27:30.575714   77047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:30.580319   77047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:30.580381   77047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:30.586711   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:27:30.598659   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:27:30.610507   77047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:27:30.615650   77047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:27:30.615700   77047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:27:30.622094   77047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
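The openssl/ln steps above install each certificate under /etc/ssl/certs as <subject-hash>.0. A small illustrative sketch of that hash-and-symlink step (shelling out to openssl, as the log does; paths taken from the log lines):

    // Illustrative: compute the openssl subject hash for a PEM cert and link it
    // into /etc/ssl/certs as <hash>.0 if the link is not already present.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            _ = os.Symlink(cert, link)
        }
    }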
	I0807 19:27:30.648792   77047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:27:30.658806   77047 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0807 19:27:30.658868   77047 kubeadm.go:392] StartCluster: {Name:kindnet-853483 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:kindnet-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.166 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:27:30.658974   77047 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:27:30.659034   77047 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:27:30.711534   77047 cri.go:89] found id: ""
	I0807 19:27:30.711609   77047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0807 19:27:30.724026   77047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0807 19:27:30.734917   77047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0807 19:27:30.747397   77047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0807 19:27:30.747416   77047 kubeadm.go:157] found existing configuration files:
	
	I0807 19:27:30.747455   77047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0807 19:27:30.763073   77047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0807 19:27:30.763133   77047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0807 19:27:30.777720   77047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0807 19:27:30.790766   77047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0807 19:27:30.790827   77047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0807 19:27:30.804610   77047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0807 19:27:30.817763   77047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0807 19:27:30.817829   77047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0807 19:27:30.829426   77047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0807 19:27:30.839165   77047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0807 19:27:30.839248   77047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0807 19:27:30.849236   77047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0807 19:27:31.070008   77047 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0807 19:27:31.076869   76375 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:27:31.076895   76375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0807 19:27:31.076922   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHHostname
	I0807 19:27:31.080785   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:31.081340   76375 main.go:141] libmachine: (auto-853483) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:c2:31", ip: ""} in network mk-auto-853483: {Iface:virbr4 ExpiryTime:2024-08-07 20:26:51 +0000 UTC Type:0 Mac:52:54:00:0e:c2:31 Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:auto-853483 Clientid:01:52:54:00:0e:c2:31}
	I0807 19:27:31.081364   76375 main.go:141] libmachine: (auto-853483) DBG | domain auto-853483 has defined IP address 192.168.72.13 and MAC address 52:54:00:0e:c2:31 in network mk-auto-853483
	I0807 19:27:31.081619   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHPort
	I0807 19:27:31.081824   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHKeyPath
	I0807 19:27:31.082078   76375 main.go:141] libmachine: (auto-853483) Calling .GetSSHUsername
	I0807 19:27:31.082247   76375 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/auto-853483/id_rsa Username:docker}
	I0807 19:27:31.117369   76375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0807 19:27:31.144195   76375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:27:31.144195   76375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0807 19:27:31.222376   76375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0807 19:27:32.478815   76375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.361400256s)
	I0807 19:27:32.478863   76375 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.334599974s)
	I0807 19:27:32.478873   76375 main.go:141] libmachine: Making call to close driver server
	I0807 19:27:32.478884   76375 main.go:141] libmachine: (auto-853483) Calling .Close
	I0807 19:27:32.479178   76375 main.go:141] libmachine: Successfully made call to close driver server
	I0807 19:27:32.479212   76375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 19:27:32.479222   76375 main.go:141] libmachine: Making call to close driver server
	I0807 19:27:32.479701   76375 main.go:141] libmachine: (auto-853483) Calling .Close
	I0807 19:27:32.479998   76375 main.go:141] libmachine: Successfully made call to close driver server
	I0807 19:27:32.480012   76375 main.go:141] libmachine: (auto-853483) DBG | Closing plugin on server side
	I0807 19:27:32.480024   76375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 19:27:32.480689   76375 node_ready.go:35] waiting up to 15m0s for node "auto-853483" to be "Ready" ...
	I0807 19:27:33.296171   76375 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.151856552s)
	I0807 19:27:33.296280   76375 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
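The sed pipeline above injects a hosts{} block for host.minikube.internal ahead of the Corefile's forward directive before replacing the coredns ConfigMap. An illustrative Go version of that text edit (the sample Corefile below is a typical default, not the one read from this cluster):

    // Illustrative: insert a hosts{} stanza before the "forward . /etc/resolv.conf"
    // line of a Corefile, mirroring the sed -e '/forward .../i ...' step above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
            errors
            health
            forward . /etc/resolv.conf
            cache 30
    }`
        hosts := `        hosts {
               192.168.72.1 host.minikube.internal
               fallthrough
            }`
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hosts)
            }
            out = append(out, line)
        }
        fmt.Println(strings.Join(out, "\n"))
    }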
	I0807 19:27:33.304078   76375 node_ready.go:49] node "auto-853483" has status "Ready":"True"
	I0807 19:27:33.304105   76375 node_ready.go:38] duration metric: took 823.395874ms for node "auto-853483" to be "Ready" ...
	I0807 19:27:33.304116   76375 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0807 19:27:33.324994   76375 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-dg6xx" in "kube-system" namespace to be "Ready" ...
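The node_ready/pod_ready steps above poll the API server until the node and the system-critical pods report Ready. A hedged client-go sketch of the node half, assuming the in-VM kubeconfig path shown elsewhere in the log (wait.PollImmediate is used for brevity; this is not minikube's implementation):

    // Illustrative: poll until node "auto-853483" has condition Ready=True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        err = wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "auto-853483", metav1.GetOptions{})
            if err != nil {
                return false, nil // keep retrying on transient errors
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("node Ready:", err == nil)
    }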
	I0807 19:27:33.451944   76375 main.go:141] libmachine: Making call to close driver server
	I0807 19:27:33.451966   76375 main.go:141] libmachine: (auto-853483) Calling .Close
	I0807 19:27:33.452259   76375 main.go:141] libmachine: Successfully made call to close driver server
	I0807 19:27:33.452282   76375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 19:27:33.614870   76375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.392422049s)
	I0807 19:27:33.614932   76375 main.go:141] libmachine: Making call to close driver server
	I0807 19:27:33.614944   76375 main.go:141] libmachine: (auto-853483) Calling .Close
	I0807 19:27:33.615291   76375 main.go:141] libmachine: Successfully made call to close driver server
	I0807 19:27:33.615309   76375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 19:27:33.615318   76375 main.go:141] libmachine: Making call to close driver server
	I0807 19:27:33.615327   76375 main.go:141] libmachine: (auto-853483) Calling .Close
	I0807 19:27:33.615337   76375 main.go:141] libmachine: (auto-853483) DBG | Closing plugin on server side
	I0807 19:27:33.615561   76375 main.go:141] libmachine: (auto-853483) DBG | Closing plugin on server side
	I0807 19:27:33.615586   76375 main.go:141] libmachine: Successfully made call to close driver server
	I0807 19:27:33.615597   76375 main.go:141] libmachine: Making call to close connection to plugin binary
	I0807 19:27:33.618471   76375 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0807 19:27:34.012173   77301 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:27:34.012219   77301 machine.go:97] duration metric: took 9.943655852s to provisionDockerMachine
	I0807 19:27:34.012234   77301 start.go:293] postStartSetup for "kubernetes-upgrade-235652" (driver="kvm2")
	I0807 19:27:34.012248   77301 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:27:34.012275   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:34.012625   77301 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:27:34.012653   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:34.015995   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.016508   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:34.016535   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.016860   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:34.017060   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:34.017251   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:34.017409   77301 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:27:34.119403   77301 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:27:34.125324   77301 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:27:34.125351   77301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:27:34.125426   77301 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:27:34.125519   77301 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:27:34.125656   77301 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:27:34.139915   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:27:34.178134   77301 start.go:296] duration metric: took 165.882196ms for postStartSetup
	I0807 19:27:34.178197   77301 fix.go:56] duration metric: took 10.136901592s for fixHost
	I0807 19:27:34.178225   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:34.181517   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.181979   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:34.182011   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.182201   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:34.182398   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:34.182620   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:34.182796   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:34.182996   77301 main.go:141] libmachine: Using SSH client type: native
	I0807 19:27:34.183206   77301 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0807 19:27:34.183219   77301 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 19:27:34.304331   77301 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723058854.295416969
	
	I0807 19:27:34.304355   77301 fix.go:216] guest clock: 1723058854.295416969
	I0807 19:27:34.304365   77301 fix.go:229] Guest: 2024-08-07 19:27:34.295416969 +0000 UTC Remote: 2024-08-07 19:27:34.178203153 +0000 UTC m=+35.818548814 (delta=117.213816ms)
	I0807 19:27:34.304407   77301 fix.go:200] guest clock delta is within tolerance: 117.213816ms
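The guest-clock check above runs `date +%s.%N` inside the VM, parses the result, and compares it against the host clock with a drift tolerance. A minimal sketch of that comparison (the 2-second threshold is an assumption for illustration):

    // Illustrative: parse the guest's seconds.nanoseconds timestamp and compare
    // it with the host clock, flagging drift beyond a tolerance.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1723058854.295416969" // output of `date +%s.%N` over SSH
        parts := strings.SplitN(guestOut, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        host := time.Now() // sampled right after the SSH call returns
        delta := host.Sub(guest)
        if math.Abs(delta.Seconds()) > 2.0 {
            fmt.Printf("guest clock drift %v exceeds tolerance, would resync\n", delta)
        } else {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }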
	I0807 19:27:34.304414   77301 start.go:83] releasing machines lock for "kubernetes-upgrade-235652", held for 10.263156779s
	I0807 19:27:34.304437   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:34.304712   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:27:34.307882   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.308387   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:34.308425   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.308693   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:34.309371   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:34.309561   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .DriverName
	I0807 19:27:34.309671   77301 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:27:34.309707   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:34.309781   77301 ssh_runner.go:195] Run: cat /version.json
	I0807 19:27:34.309796   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHHostname
	I0807 19:27:34.312780   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.312972   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.313224   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:34.313256   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.313283   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:34.313302   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:34.313493   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:34.313681   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHPort
	I0807 19:27:34.313715   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:34.313787   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHKeyPath
	I0807 19:27:34.313915   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:34.313924   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetSSHUsername
	I0807 19:27:34.314067   77301 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:27:34.314219   77301 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/kubernetes-upgrade-235652/id_rsa Username:docker}
	I0807 19:27:34.428776   77301 ssh_runner.go:195] Run: systemctl --version
	I0807 19:27:34.453425   77301 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:27:34.842179   77301 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 19:27:34.899755   77301 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:27:34.899848   77301 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:27:34.994512   77301 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 19:27:34.994540   77301 start.go:495] detecting cgroup driver to use...
	I0807 19:27:34.994618   77301 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:27:35.075074   77301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:27:35.135268   77301 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:27:35.135339   77301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:27:35.168698   77301 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:27:35.195356   77301 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:27:35.510980   77301 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:27:35.931612   77301 docker.go:233] disabling docker service ...
	I0807 19:27:35.931681   77301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:27:36.024089   77301 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:27:36.039650   77301 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:27:36.255781   77301 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:27:36.508025   77301 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:27:36.577382   77301 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:27:36.638738   77301 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0807 19:27:36.638832   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.681589   77301 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:27:36.681669   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.750237   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.767872   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.784377   77301 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:27:36.803887   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.822691   77301 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.842945   77301 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:27:36.858093   77301 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:27:36.873018   77301 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:27:36.891981   77301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:37.133753   77301 ssh_runner.go:195] Run: sudo systemctl restart crio
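The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager) with sed before restarting crio. Here is an equivalent line-oriented edit sketched in Go (the sample config contents are made up for illustration):

    // Illustrative: the same pause_image / cgroup_manager rewrite done above with sed,
    // expressed as regexp replacements over the config text.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `# crio runtime options
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    `
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf) // in minikube this is written back to /etc/crio/crio.conf.d/02-crio.conf
    }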
	I0807 19:27:37.842143   77301 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:27:37.842216   77301 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:27:37.847503   77301 start.go:563] Will wait 60s for crictl version
	I0807 19:27:37.847560   77301 ssh_runner.go:195] Run: which crictl
	I0807 19:27:37.851874   77301 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:27:37.890267   77301 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
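"Will wait 60s for socket path /var/run/crio/crio.sock" and the crictl version probe above are simple poll-until-ready loops. An illustrative sketch of the socket wait:

    // Illustrative: poll for the CRI socket to appear, or give up after 60s.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock"
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println("socket is up:", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for", sock)
    }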
	I0807 19:27:37.890359   77301 ssh_runner.go:195] Run: crio --version
	I0807 19:27:37.926553   77301 ssh_runner.go:195] Run: crio --version
	I0807 19:27:37.961825   77301 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0807 19:27:37.963338   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) Calling .GetIP
	I0807 19:27:37.966508   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:37.967019   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:38:b8", ip: ""} in network mk-kubernetes-upgrade-235652: {Iface:virbr2 ExpiryTime:2024-08-07 20:26:27 +0000 UTC Type:0 Mac:52:54:00:24:38:b8 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-235652 Clientid:01:52:54:00:24:38:b8}
	I0807 19:27:37.967066   77301 main.go:141] libmachine: (kubernetes-upgrade-235652) DBG | domain kubernetes-upgrade-235652 has defined IP address 192.168.50.208 and MAC address 52:54:00:24:38:b8 in network mk-kubernetes-upgrade-235652
	I0807 19:27:37.967308   77301 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0807 19:27:37.973135   77301 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:27:37.973279   77301 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0807 19:27:37.973350   77301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:38.030795   77301 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:27:38.030827   77301 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:27:38.030889   77301 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:27:38.067816   77301 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:27:38.067843   77301 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:27:38.067854   77301 kubeadm.go:934] updating node { 192.168.50.208 8443 v1.31.0-rc.0 crio true true} ...
	I0807 19:27:38.068006   77301 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-235652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:27:38.068092   77301 ssh_runner.go:195] Run: crio config
	I0807 19:27:38.119711   77301 cni.go:84] Creating CNI manager for ""
	I0807 19:27:38.119730   77301 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:27:38.119741   77301 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:27:38.119761   77301 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.208 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-235652 NodeName:kubernetes-upgrade-235652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:27:38.119930   77301 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-235652"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:27:38.120001   77301 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0807 19:27:38.131398   77301 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:27:38.131475   77301 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:27:38.142250   77301 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0807 19:27:38.161178   77301 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0807 19:27:38.180552   77301 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0807 19:27:38.204755   77301 ssh_runner.go:195] Run: grep 192.168.50.208	control-plane.minikube.internal$ /etc/hosts
	I0807 19:27:38.209815   77301 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:27:38.386117   77301 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:27:33.620125   76375 addons.go:510] duration metric: took 2.878644094s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0807 19:27:33.805525   76375 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-853483" context rescaled to 1 replicas
	I0807 19:27:35.329680   76375 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-dg6xx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-dg6xx" not found
	I0807 19:27:35.329766   76375 pod_ready.go:81] duration metric: took 2.004734351s for pod "coredns-7db6d8ff4d-dg6xx" in "kube-system" namespace to be "Ready" ...
	E0807 19:27:35.329783   76375 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-dg6xx" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-dg6xx" not found
	I0807 19:27:35.329792   76375 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-xwt97" in "kube-system" namespace to be "Ready" ...
	I0807 19:27:37.338531   76375 pod_ready.go:102] pod "coredns-7db6d8ff4d-xwt97" in "kube-system" namespace has status "Ready":"False"
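
The pod_ready lines above poll the pod until its Ready condition turns True or the 15m0s budget expires. A minimal client-go sketch of the same loop; the kubeconfig path is a placeholder and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-7db6d8ff4d-xwt97", metav1.GetOptions{}) // pod name from the log above
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}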
	I0807 19:27:41.394703   77047 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0807 19:27:41.394786   77047 kubeadm.go:310] [preflight] Running pre-flight checks
	I0807 19:27:41.394877   77047 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0807 19:27:41.395043   77047 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0807 19:27:41.395202   77047 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0807 19:27:41.395326   77047 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0807 19:27:41.397149   77047 out.go:204]   - Generating certificates and keys ...
	I0807 19:27:41.397266   77047 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0807 19:27:41.397349   77047 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0807 19:27:41.397466   77047 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0807 19:27:41.397542   77047 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0807 19:27:41.397630   77047 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0807 19:27:41.397720   77047 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0807 19:27:41.397805   77047 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0807 19:27:41.397972   77047 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-853483 localhost] and IPs [192.168.61.166 127.0.0.1 ::1]
	I0807 19:27:41.398053   77047 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0807 19:27:41.398237   77047 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-853483 localhost] and IPs [192.168.61.166 127.0.0.1 ::1]
	I0807 19:27:41.398344   77047 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0807 19:27:41.398430   77047 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0807 19:27:41.398493   77047 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0807 19:27:41.398570   77047 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0807 19:27:41.398657   77047 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0807 19:27:41.398746   77047 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0807 19:27:41.398830   77047 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0807 19:27:41.398933   77047 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0807 19:27:41.399032   77047 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0807 19:27:41.399158   77047 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0807 19:27:41.399248   77047 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0807 19:27:41.400987   77047 out.go:204]   - Booting up control plane ...
	I0807 19:27:41.401104   77047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0807 19:27:41.401219   77047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0807 19:27:41.401323   77047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0807 19:27:41.401453   77047 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0807 19:27:41.401569   77047 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0807 19:27:41.401628   77047 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0807 19:27:41.401839   77047 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0807 19:27:41.401947   77047 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0807 19:27:41.402045   77047 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.934393ms
	I0807 19:27:41.402150   77047 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0807 19:27:41.402229   77047 kubeadm.go:310] [api-check] The API server is healthy after 5.502291558s
	I0807 19:27:41.402373   77047 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0807 19:27:41.402544   77047 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0807 19:27:41.402630   77047 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0807 19:27:41.402869   77047 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-853483 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0807 19:27:41.402973   77047 kubeadm.go:310] [bootstrap-token] Using token: f4lz5a.8cq65uf41897593b
	I0807 19:27:41.404498   77047 out.go:204]   - Configuring RBAC rules ...
	I0807 19:27:41.404629   77047 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0807 19:27:41.404740   77047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0807 19:27:41.404886   77047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0807 19:27:41.404988   77047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0807 19:27:41.405111   77047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0807 19:27:41.405205   77047 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0807 19:27:41.405354   77047 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0807 19:27:41.405397   77047 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0807 19:27:41.405436   77047 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0807 19:27:41.405442   77047 kubeadm.go:310] 
	I0807 19:27:41.405489   77047 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0807 19:27:41.405495   77047 kubeadm.go:310] 
	I0807 19:27:41.405556   77047 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0807 19:27:41.405562   77047 kubeadm.go:310] 
	I0807 19:27:41.405596   77047 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0807 19:27:41.405650   77047 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0807 19:27:41.405693   77047 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0807 19:27:41.405698   77047 kubeadm.go:310] 
	I0807 19:27:41.405741   77047 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0807 19:27:41.405746   77047 kubeadm.go:310] 
	I0807 19:27:41.405784   77047 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0807 19:27:41.405790   77047 kubeadm.go:310] 
	I0807 19:27:41.405855   77047 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0807 19:27:41.405919   77047 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0807 19:27:41.405975   77047 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0807 19:27:41.405980   77047 kubeadm.go:310] 
	I0807 19:27:41.406048   77047 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0807 19:27:41.406113   77047 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0807 19:27:41.406119   77047 kubeadm.go:310] 
	I0807 19:27:41.406185   77047 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f4lz5a.8cq65uf41897593b \
	I0807 19:27:41.406268   77047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 \
	I0807 19:27:41.406287   77047 kubeadm.go:310] 	--control-plane 
	I0807 19:27:41.406293   77047 kubeadm.go:310] 
	I0807 19:27:41.406360   77047 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0807 19:27:41.406365   77047 kubeadm.go:310] 
	I0807 19:27:41.406430   77047 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f4lz5a.8cq65uf41897593b \
	I0807 19:27:41.406522   77047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:253c980a4c9122831b91d44000373c8d68b6d1a783eb0196691a7459bf1d3ac7 
	I0807 19:27:41.406532   77047 cni.go:84] Creating CNI manager for "kindnet"
	I0807 19:27:41.408237   77047 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0807 19:27:41.409751   77047 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0807 19:27:41.415643   77047 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0807 19:27:41.415661   77047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0807 19:27:41.438986   77047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0807 19:27:41.775146   77047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0807 19:27:41.775236   77047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:41.775341   77047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-853483 minikube.k8s.io/updated_at=2024_08_07T19_27_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e minikube.k8s.io/name=kindnet-853483 minikube.k8s.io/primary=true
	I0807 19:27:41.847548   77047 ops.go:34] apiserver oom_adj: -16
	I0807 19:27:41.986024   77047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0807 19:27:42.486261   77047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
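
The clusterrolebinding command above grants cluster-admin to the default service account in the kube-system namespace. An equivalent call through client-go, as a sketch (the kubeconfig path is taken from the log, everything else is standard API usage):

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same effect as:
	//   kubectl create clusterrolebinding minikube-rbac \
	//     --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}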
	I0807 19:27:38.406295   77301 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652 for IP: 192.168.50.208
	I0807 19:27:38.406322   77301 certs.go:194] generating shared ca certs ...
	I0807 19:27:38.406343   77301 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:27:38.406520   77301 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:27:38.406576   77301 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:27:38.406590   77301 certs.go:256] generating profile certs ...
	I0807 19:27:38.406695   77301 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/client.key
	I0807 19:27:38.406738   77301 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key.baace47c
	I0807 19:27:38.406774   77301 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key
	I0807 19:27:38.406887   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:27:38.406914   77301 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:27:38.406922   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:27:38.406943   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:27:38.406968   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:27:38.406989   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:27:38.407024   77301 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:27:38.407671   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:27:38.436651   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:27:38.470393   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:27:38.499917   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:27:38.531519   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0807 19:27:38.565173   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:27:38.596714   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:27:38.625762   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/kubernetes-upgrade-235652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0807 19:27:38.655905   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:27:38.687802   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:27:38.748830   77301 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:27:38.885364   77301 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:27:38.915257   77301 ssh_runner.go:195] Run: openssl version
	I0807 19:27:38.934924   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:27:39.095197   77301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:39.147996   77301 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:39.148070   77301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:27:39.203166   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:27:39.258608   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:27:39.345863   77301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:27:39.377642   77301 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:27:39.377712   77301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:27:39.398307   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:27:39.418031   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:27:39.438274   77301 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:27:39.447898   77301 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:27:39.447969   77301 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:27:39.459063   77301 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:27:39.474745   77301 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:27:39.483355   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:27:39.490126   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:27:39.505754   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:27:39.519501   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:27:39.534894   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:27:39.547343   77301 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
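
Each openssl x509 -checkend 86400 call above asks whether the certificate expires within the next 24 hours. The same check in Go's crypto/x509, as a sketch (the certificate path is one of the files checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the certificate
	// expire within the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}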
	I0807 19:27:39.575484   77301 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-235652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-235652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:27:39.575607   77301 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:27:39.575682   77301 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:27:39.667136   77301 cri.go:89] found id: "50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d"
	I0807 19:27:39.667158   77301 cri.go:89] found id: "97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf"
	I0807 19:27:39.667164   77301 cri.go:89] found id: "55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7"
	I0807 19:27:39.667169   77301 cri.go:89] found id: "4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0"
	I0807 19:27:39.667173   77301 cri.go:89] found id: "eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e"
	I0807 19:27:39.667185   77301 cri.go:89] found id: "5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076"
	I0807 19:27:39.667189   77301 cri.go:89] found id: "eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041"
	I0807 19:27:39.667193   77301 cri.go:89] found id: "e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff"
	I0807 19:27:39.667197   77301 cri.go:89] found id: ""
	I0807 19:27:39.667247   77301 ssh_runner.go:195] Run: sudo runc list -f json
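
The container IDs above come from crictl ps -a --quiet filtered by the kube-system namespace label; the final empty entry is just the trailing newline in the command output. A small sketch of running that query and collecting the IDs (assumes crictl is on PATH and the CRI socket is reachable):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query as the cri.go listing above: all containers whose pod
	// namespace label is kube-system, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out)) // drops the trailing empty line
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}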
	
	
	==> CRI-O <==
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.243525744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058869243497359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d644eb2-5074-4076-9cfe-bf4eeb22e709 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.244581093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=799dd8a7-9870-4883-b8fb-cbb94ee036e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.244664255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=799dd8a7-9870-4883-b8fb-cbb94ee036e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.245079854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af9ebd0355682142e47b39c34401845c400bad845e48023d840cd1b175f0a399,PodSandboxId:e99bfff37622a18d919fd4b2e2109510878ddb7efee8edbabdd2d5c6485b4c80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1723058866016691718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4e0dd0ad27206b26abac12cbcc7dc22d8e38f2b91b7585ffa352002752a57b,PodSandboxId:2ee7dc8bdb274c77ebe1d803c945045fae91a09a0d17e06339606c5907ff12f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058866049807614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6595539c4c15020032f86c913baa82c621dd2f594088552f248743089431c562,PodSandboxId:3d6b0ddde652b5ad05583c5d6a1e65a4ff6ebcf74aa6a6d7f635547164755c17,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723058865985334986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c007b4203a77d863613c0cec309dfd4be3b9e00d8d6b1045ad8da62e6c3f76d,PodSandboxId:7b02d0c65dfa2b63910813e76dbd7aef3651a5b9711a114ab5539212d72ffdb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058865976141417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-94
0f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56857e18d0f78ae74825442a84c662401587f4f10090f3ebf10e87c494fde25,PodSandboxId:c4137495d50dfefb913bd3fdfb2073267c0a14ab6d8a52ace7c2e630eccde97a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723058862193544268,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9381b88349febdca21c5afe47140a7a9f5de2e5692dc2e794538f8d05ce9ff,PodSandboxId:c64ee4bf7e7df9e3a4bce613d9090e5135643d2acd57ed5a373b1e254d4b1c53,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723058862185972
730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9b30801bd00211e2c024a64f8894c3cb1c10f4bac86eecbbf6e4bdcbe60078,PodSandboxId:20f8657aa1b092e16ef2230e596cd6e9ef04e185087f07de15bc4b088269d189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:172
3058862176898504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1ddcfba9261ab6533ba1f0c5290e4f4345a42f8679ffc54037977423d03821,PodSandboxId:0271648c99d26fd4e19fdef061c1e4f781ed10d6f526c8611bf8172df761d3d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172305886216354366
0,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d,PodSandboxId:238365572cf7e0b62cfb20be2d943b413942015f65da727517c5b01f3b1bbd8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856559534643,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf,PodSandboxId:146486a04183ea4d0bec647ce4665ca4ad877c7fc6c2de3f0684bdcfc3915b5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856423230141,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-940f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0,PodSandboxId:2f74e3ac5b0d219afb52d455e4da1ab3228304f5646987
98cf64cfa3a8ce6eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723058855212808689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7,PodSandboxId:9e02249f8e1a0564575ada71268be1719b021b315f76b7e8bfcbf1ba0a0
b5c70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723058855310992929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e,PodSandboxId:1db61fa4373d1d77c4c7945df40abf74ab3c3b
8d718f98b14cae0c84fba8a327,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723058855192715772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076,PodSandboxId:17d0cddb41c6b2f6346c4d8897cdbb85c0dd5c827e8b
0170ea00dcf5eb7be6fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723058855185655181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041,PodSandboxId:dc34abcfb292f328d262a930fe1bb8158ece79f7c2e3ab2201
3990b2970a9245,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723058854966642877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff,PodSandboxId:5e6c1770b60139b886e63011a422fc90cb9d75b12dd7e235e783550c4298a895,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723058854909904695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=799dd8a7-9870-4883-b8fb-cbb94ee036e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.291893269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f34d7deb-2845-43ec-926f-236345accf8d name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.292047659Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f34d7deb-2845-43ec-926f-236345accf8d name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.293385555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a14a19d5-4fb8-4a7a-bfef-179e493caa86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.293768285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058869293741145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a14a19d5-4fb8-4a7a-bfef-179e493caa86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.294556418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3841c88f-8547-4d87-b5d4-32e71c118986 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.294633097Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3841c88f-8547-4d87-b5d4-32e71c118986 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.295167378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af9ebd0355682142e47b39c34401845c400bad845e48023d840cd1b175f0a399,PodSandboxId:e99bfff37622a18d919fd4b2e2109510878ddb7efee8edbabdd2d5c6485b4c80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1723058866016691718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4e0dd0ad27206b26abac12cbcc7dc22d8e38f2b91b7585ffa352002752a57b,PodSandboxId:2ee7dc8bdb274c77ebe1d803c945045fae91a09a0d17e06339606c5907ff12f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058866049807614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6595539c4c15020032f86c913baa82c621dd2f594088552f248743089431c562,PodSandboxId:3d6b0ddde652b5ad05583c5d6a1e65a4ff6ebcf74aa6a6d7f635547164755c17,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723058865985334986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c007b4203a77d863613c0cec309dfd4be3b9e00d8d6b1045ad8da62e6c3f76d,PodSandboxId:7b02d0c65dfa2b63910813e76dbd7aef3651a5b9711a114ab5539212d72ffdb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058865976141417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-94
0f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56857e18d0f78ae74825442a84c662401587f4f10090f3ebf10e87c494fde25,PodSandboxId:c4137495d50dfefb913bd3fdfb2073267c0a14ab6d8a52ace7c2e630eccde97a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723058862193544268,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9381b88349febdca21c5afe47140a7a9f5de2e5692dc2e794538f8d05ce9ff,PodSandboxId:c64ee4bf7e7df9e3a4bce613d9090e5135643d2acd57ed5a373b1e254d4b1c53,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723058862185972
730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9b30801bd00211e2c024a64f8894c3cb1c10f4bac86eecbbf6e4bdcbe60078,PodSandboxId:20f8657aa1b092e16ef2230e596cd6e9ef04e185087f07de15bc4b088269d189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:172
3058862176898504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1ddcfba9261ab6533ba1f0c5290e4f4345a42f8679ffc54037977423d03821,PodSandboxId:0271648c99d26fd4e19fdef061c1e4f781ed10d6f526c8611bf8172df761d3d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172305886216354366
0,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d,PodSandboxId:238365572cf7e0b62cfb20be2d943b413942015f65da727517c5b01f3b1bbd8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856559534643,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf,PodSandboxId:146486a04183ea4d0bec647ce4665ca4ad877c7fc6c2de3f0684bdcfc3915b5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856423230141,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-940f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0,PodSandboxId:2f74e3ac5b0d219afb52d455e4da1ab3228304f5646987
98cf64cfa3a8ce6eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723058855212808689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7,PodSandboxId:9e02249f8e1a0564575ada71268be1719b021b315f76b7e8bfcbf1ba0a0
b5c70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723058855310992929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e,PodSandboxId:1db61fa4373d1d77c4c7945df40abf74ab3c3b
8d718f98b14cae0c84fba8a327,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723058855192715772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076,PodSandboxId:17d0cddb41c6b2f6346c4d8897cdbb85c0dd5c827e8b
0170ea00dcf5eb7be6fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723058855185655181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041,PodSandboxId:dc34abcfb292f328d262a930fe1bb8158ece79f7c2e3ab2201
3990b2970a9245,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723058854966642877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff,PodSandboxId:5e6c1770b60139b886e63011a422fc90cb9d75b12dd7e235e783550c4298a895,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723058854909904695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3841c88f-8547-4d87-b5d4-32e71c118986 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.347192187Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b499576f-b9bd-4175-8f44-3df6d66c4cf6 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.347286496Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b499576f-b9bd-4175-8f44-3df6d66c4cf6 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.348263478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0857762-f3a2-43e3-8573-8e2b92dc1e0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.349372065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058869349346805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0857762-f3a2-43e3-8573-8e2b92dc1e0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.350109847Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fda3f3f-29cb-4012-b7af-40d5ba412ca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.350174444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fda3f3f-29cb-4012-b7af-40d5ba412ca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.350814854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af9ebd0355682142e47b39c34401845c400bad845e48023d840cd1b175f0a399,PodSandboxId:e99bfff37622a18d919fd4b2e2109510878ddb7efee8edbabdd2d5c6485b4c80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1723058866016691718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4e0dd0ad27206b26abac12cbcc7dc22d8e38f2b91b7585ffa352002752a57b,PodSandboxId:2ee7dc8bdb274c77ebe1d803c945045fae91a09a0d17e06339606c5907ff12f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058866049807614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6595539c4c15020032f86c913baa82c621dd2f594088552f248743089431c562,PodSandboxId:3d6b0ddde652b5ad05583c5d6a1e65a4ff6ebcf74aa6a6d7f635547164755c17,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723058865985334986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c007b4203a77d863613c0cec309dfd4be3b9e00d8d6b1045ad8da62e6c3f76d,PodSandboxId:7b02d0c65dfa2b63910813e76dbd7aef3651a5b9711a114ab5539212d72ffdb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058865976141417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-94
0f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56857e18d0f78ae74825442a84c662401587f4f10090f3ebf10e87c494fde25,PodSandboxId:c4137495d50dfefb913bd3fdfb2073267c0a14ab6d8a52ace7c2e630eccde97a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723058862193544268,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9381b88349febdca21c5afe47140a7a9f5de2e5692dc2e794538f8d05ce9ff,PodSandboxId:c64ee4bf7e7df9e3a4bce613d9090e5135643d2acd57ed5a373b1e254d4b1c53,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723058862185972
730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9b30801bd00211e2c024a64f8894c3cb1c10f4bac86eecbbf6e4bdcbe60078,PodSandboxId:20f8657aa1b092e16ef2230e596cd6e9ef04e185087f07de15bc4b088269d189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:172
3058862176898504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1ddcfba9261ab6533ba1f0c5290e4f4345a42f8679ffc54037977423d03821,PodSandboxId:0271648c99d26fd4e19fdef061c1e4f781ed10d6f526c8611bf8172df761d3d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172305886216354366
0,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d,PodSandboxId:238365572cf7e0b62cfb20be2d943b413942015f65da727517c5b01f3b1bbd8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856559534643,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf,PodSandboxId:146486a04183ea4d0bec647ce4665ca4ad877c7fc6c2de3f0684bdcfc3915b5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856423230141,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-940f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0,PodSandboxId:2f74e3ac5b0d219afb52d455e4da1ab3228304f5646987
98cf64cfa3a8ce6eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723058855212808689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7,PodSandboxId:9e02249f8e1a0564575ada71268be1719b021b315f76b7e8bfcbf1ba0a0
b5c70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723058855310992929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e,PodSandboxId:1db61fa4373d1d77c4c7945df40abf74ab3c3b
8d718f98b14cae0c84fba8a327,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723058855192715772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076,PodSandboxId:17d0cddb41c6b2f6346c4d8897cdbb85c0dd5c827e8b
0170ea00dcf5eb7be6fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723058855185655181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041,PodSandboxId:dc34abcfb292f328d262a930fe1bb8158ece79f7c2e3ab2201
3990b2970a9245,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723058854966642877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff,PodSandboxId:5e6c1770b60139b886e63011a422fc90cb9d75b12dd7e235e783550c4298a895,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723058854909904695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fda3f3f-29cb-4012-b7af-40d5ba412ca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.388781252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=224570c7-26c4-4628-979a-c5964094daa6 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.388855859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=224570c7-26c4-4628-979a-c5964094daa6 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.390206959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28fe8908-f2cc-4cdb-b386-a399f4c6e2da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.390596164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058869390572374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28fe8908-f2cc-4cdb-b386-a399f4c6e2da name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.391284902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa67b4f8-dced-41f4-8c73-81726739e892 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.391341773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa67b4f8-dced-41f4-8c73-81726739e892 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:27:49 kubernetes-upgrade-235652 crio[3033]: time="2024-08-07 19:27:49.391718044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:af9ebd0355682142e47b39c34401845c400bad845e48023d840cd1b175f0a399,PodSandboxId:e99bfff37622a18d919fd4b2e2109510878ddb7efee8edbabdd2d5c6485b4c80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,CreatedAt:1723058866016691718,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d4e0dd0ad27206b26abac12cbcc7dc22d8e38f2b91b7585ffa352002752a57b,PodSandboxId:2ee7dc8bdb274c77ebe1d803c945045fae91a09a0d17e06339606c5907ff12f8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058866049807614,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6595539c4c15020032f86c913baa82c621dd2f594088552f248743089431c562,PodSandboxId:3d6b0ddde652b5ad05583c5d6a1e65a4ff6ebcf74aa6a6d7f635547164755c17,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723058865985334986,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c007b4203a77d863613c0cec309dfd4be3b9e00d8d6b1045ad8da62e6c3f76d,PodSandboxId:7b02d0c65dfa2b63910813e76dbd7aef3651a5b9711a114ab5539212d72ffdb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058865976141417,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-94
0f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56857e18d0f78ae74825442a84c662401587f4f10090f3ebf10e87c494fde25,PodSandboxId:c4137495d50dfefb913bd3fdfb2073267c0a14ab6d8a52ace7c2e630eccde97a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1723058862193544268,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a9381b88349febdca21c5afe47140a7a9f5de2e5692dc2e794538f8d05ce9ff,PodSandboxId:c64ee4bf7e7df9e3a4bce613d9090e5135643d2acd57ed5a373b1e254d4b1c53,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1723058862185972
730,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb9b30801bd00211e2c024a64f8894c3cb1c10f4bac86eecbbf6e4bdcbe60078,PodSandboxId:20f8657aa1b092e16ef2230e596cd6e9ef04e185087f07de15bc4b088269d189,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:172
3058862176898504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1ddcfba9261ab6533ba1f0c5290e4f4345a42f8679ffc54037977423d03821,PodSandboxId:0271648c99d26fd4e19fdef061c1e4f781ed10d6f526c8611bf8172df761d3d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:172305886216354366
0,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d,PodSandboxId:238365572cf7e0b62cfb20be2d943b413942015f65da727517c5b01f3b1bbd8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856559534643,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wq5wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efa459c9-017a-4b1b-b843-7f198cb81688,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf,PodSandboxId:146486a04183ea4d0bec647ce4665ca4ad877c7fc6c2de3f0684bdcfc3915b5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058856423230141,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-j5hvx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7464c5ed-75a1-40f9-a974-940f2dcae1c9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0,PodSandboxId:2f74e3ac5b0d219afb52d455e4da1ab3228304f5646987
98cf64cfa3a8ce6eee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723058855212808689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee738ef3-5e02-4f9b-a52f-1ac7c67aad38,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7,PodSandboxId:9e02249f8e1a0564575ada71268be1719b021b315f76b7e8bfcbf1ba0a0
b5c70,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1723058855310992929,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ee7a207ab00a37242c20690382b9f4,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e,PodSandboxId:1db61fa4373d1d77c4c7945df40abf74ab3c3b
8d718f98b14cae0c84fba8a327,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1723058855192715772,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a982c7a52a46d8bd9ccc37a28521f3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076,PodSandboxId:17d0cddb41c6b2f6346c4d8897cdbb85c0dd5c827e8b
0170ea00dcf5eb7be6fd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1723058855185655181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 436b742f871e20901680926876f2a21a,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041,PodSandboxId:dc34abcfb292f328d262a930fe1bb8158ece79f7c2e3ab2201
3990b2970a9245,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723058854966642877,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-235652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8112723b2098e615cf601db610db0215,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff,PodSandboxId:5e6c1770b60139b886e63011a422fc90cb9d75b12dd7e235e783550c4298a895,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1723058854909904695,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzs6j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1aa5ab6-7255-494a-a4f5-613b9296f1d8,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa67b4f8-dced-41f4-8c73-81726739e892 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1d4e0dd0ad272       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   2ee7dc8bdb274       coredns-6f6b679f8f-wq5wg
	af9ebd0355682       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   3 seconds ago       Running             kube-proxy                2                   e99bfff37622a       kube-proxy-zzs6j
	6595539c4c150       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   3d6b0ddde652b       storage-provisioner
	8c007b4203a77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   7b02d0c65dfa2       coredns-6f6b679f8f-j5hvx
	e56857e18d0f7       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   7 seconds ago       Running             kube-scheduler            2                   c4137495d50df       kube-scheduler-kubernetes-upgrade-235652
	0a9381b88349f       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   2                   c64ee4bf7e7df       kube-controller-manager-kubernetes-upgrade-235652
	bb9b30801bd00       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            2                   20f8657aa1b09       kube-apiserver-kubernetes-upgrade-235652
	5d1ddcfba9261       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   0271648c99d26       etcd-kubernetes-upgrade-235652
	50a1f4a78d82d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Exited              coredns                   1                   238365572cf7e       coredns-6f6b679f8f-wq5wg
	97447711830a4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Exited              coredns                   1                   146486a04183e       coredns-6f6b679f8f-j5hvx
	55d668d387dd0       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   14 seconds ago      Exited              kube-controller-manager   1                   9e02249f8e1a0       kube-controller-manager-kubernetes-upgrade-235652
	4d525a4fd11da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       1                   2f74e3ac5b0d2       storage-provisioner
	eaa9d7f9f3e33       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   14 seconds ago      Exited              kube-apiserver            1                   1db61fa4373d1       kube-apiserver-kubernetes-upgrade-235652
	5f48044ef1c8d       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   14 seconds ago      Exited              kube-scheduler            1                   17d0cddb41c6b       kube-scheduler-kubernetes-upgrade-235652
	eed873fbca8f6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 seconds ago      Exited              etcd                      1                   dc34abcfb292f       etcd-kubernetes-upgrade-235652
	e1ff41553dae6       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   14 seconds ago      Exited              kube-proxy                1                   5e6c1770b6013       kube-proxy-zzs6j
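	
	For reference, the ListContainers, Version, and ImageFsInfo entries in the CRI-O debug log above are plain CRI gRPC calls against the runtime socket, and the container status table is just a rendering of the same ListContainers response. The Go sketch below issues an equivalent unfiltered ListContainers call; it is illustrative only and assumes the default CRI-O socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 client (neither is taken from this run).
	
	// list_containers.go: query the CRI-O runtime over its gRPC socket and print
	// a listing similar to the "container status" table above.
	// Illustrative sketch only; socket path and module versions are assumptions.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// The CRI is plain gRPC over a unix socket; no TLS is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// Same unfiltered call the kubelet (and `crictl ps -a`) issues.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s  %-25s  attempt=%d  %s\n",
				c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}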
	
	
	==> coredns [1d4e0dd0ad27206b26abac12cbcc7dc22d8e38f2b91b7585ffa352002752a57b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d] <==
	
	
	==> coredns [8c007b4203a77d863613c0cec309dfd4be3b9e00d8d6b1045ad8da62e6c3f76d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-235652
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-235652
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:26:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-235652
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:27:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:27:45 +0000   Wed, 07 Aug 2024 19:26:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:27:45 +0000   Wed, 07 Aug 2024 19:26:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:27:45 +0000   Wed, 07 Aug 2024 19:26:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:27:45 +0000   Wed, 07 Aug 2024 19:26:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.208
	  Hostname:    kubernetes-upgrade-235652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164180Ki
	  pods:               110
	System Info:
	  Machine ID:                 61bc021f9a424551b2119712ed9db573
	  System UUID:                61bc021f-9a42-4551-b211-9712ed9db573
	  Boot ID:                    94eb7ea6-e460-409a-bb3c-6c96e606fd7c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-j5hvx                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 coredns-6f6b679f8f-wq5wg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 etcd-kubernetes-upgrade-235652                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-235652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-235652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-zzs6j                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-kubernetes-upgrade-235652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s (x7 over 66s)  kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasSufficientPID
	  Normal  Starting                 66s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           55s                node-controller  Node kubernetes-upgrade-235652 event: Registered Node kubernetes-upgrade-235652 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-235652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-235652 event: Registered Node kubernetes-upgrade-235652 in Controller
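	
	For reference, the conditions, capacity, and allocatable figures in the describe-nodes block above come straight from the Node object's status. A minimal client-go sketch that reads the same fields is shown below; it is illustrative only and assumes a reachable cluster via the kubeconfig named in the KUBECONFIG environment variable.
	
	// node_status.go: read node conditions and allocatable resources for the
	// node shown above. Illustrative sketch only; the kubeconfig location is
	// taken from $KUBECONFIG and is an assumption, not part of this test run.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build clientset: %v", err)
		}
	
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"kubernetes-upgrade-235652", metav1.GetOptions{})
		if err != nil {
			log.Fatalf("get node: %v", err)
		}
	
		// Matches the MemoryPressure / DiskPressure / PIDPressure / Ready rows above.
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
		}
		fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu().String())
		fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
	}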
	
	
	==> dmesg <==
	[  +1.654521] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.722723] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.076211] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071395] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.196346] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.179493] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.324090] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +4.428669] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.065522] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.054754] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[ +14.336124] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.100095] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 7 19:27] kauditd_printk_skb: 113 callbacks suppressed
	[  +0.808777] systemd-fstab-generator[2564]: Ignoring "noauto" option for root device
	[  +0.434977] systemd-fstab-generator[2749]: Ignoring "noauto" option for root device
	[  +0.366956] systemd-fstab-generator[2869]: Ignoring "noauto" option for root device
	[  +0.226179] systemd-fstab-generator[2888]: Ignoring "noauto" option for root device
	[  +0.638771] systemd-fstab-generator[3018]: Ignoring "noauto" option for root device
	[  +1.293075] systemd-fstab-generator[3355]: Ignoring "noauto" option for root device
	[  +1.296801] kauditd_printk_skb: 292 callbacks suppressed
	[  +1.855164] systemd-fstab-generator[3946]: Ignoring "noauto" option for root device
	[  +4.627602] kauditd_printk_skb: 50 callbacks suppressed
	[  +1.223482] systemd-fstab-generator[4471]: Ignoring "noauto" option for root device
	
	
	==> etcd [5d1ddcfba9261ab6533ba1f0c5290e4f4345a42f8679ffc54037977423d03821] <==
	{"level":"info","ts":"2024-08-07T19:27:42.521597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 switched to configuration voters=(3568824936382685312)"}
	{"level":"info","ts":"2024-08-07T19:27:42.521716Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","added-peer-id":"3187035f06a56080","added-peer-peer-urls":["https://192.168.50.208:2380"]}
	{"level":"info","ts":"2024-08-07T19:27:42.521829Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:27:42.521879Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:27:42.524186Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:27:42.524417Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3187035f06a56080","initial-advertise-peer-urls":["https://192.168.50.208:2380"],"listen-peer-urls":["https://192.168.50.208:2380"],"advertise-client-urls":["https://192.168.50.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:27:42.524458Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:27:42.524528Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.208:2380"}
	{"level":"info","ts":"2024-08-07T19:27:42.524550Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.208:2380"}
	{"level":"info","ts":"2024-08-07T19:27:43.989765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T19:27:43.989822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T19:27:43.989861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 received MsgPreVoteResp from 3187035f06a56080 at term 2"}
	{"level":"info","ts":"2024-08-07T19:27:43.989875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:27:43.989882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 received MsgVoteResp from 3187035f06a56080 at term 3"}
	{"level":"info","ts":"2024-08-07T19:27:43.989895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became leader at term 3"}
	{"level":"info","ts":"2024-08-07T19:27:43.989902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3187035f06a56080 elected leader 3187035f06a56080 at term 3"}
	{"level":"info","ts":"2024-08-07T19:27:43.995687Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3187035f06a56080","local-member-attributes":"{Name:kubernetes-upgrade-235652 ClientURLs:[https://192.168.50.208:2379]}","request-path":"/0/members/3187035f06a56080/attributes","cluster-id":"9c0c31ebbc007527","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:27:43.995827Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:27:43.996130Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:27:43.996437Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:27:43.996494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:27:43.997347Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-07T19:27:43.998651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:27:43.997345Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-07T19:27:43.999907Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.208:2379"}
	
	
	==> etcd [eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041] <==
	{"level":"info","ts":"2024-08-07T19:27:35.786066Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-07T19:27:35.843444Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","commit-index":425}
	{"level":"info","ts":"2024-08-07T19:27:35.843786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-07T19:27:35.844054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became follower at term 2"}
	{"level":"info","ts":"2024-08-07T19:27:35.844131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 3187035f06a56080 [peers: [], term: 2, commit: 425, applied: 0, lastindex: 425, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-07T19:27:35.852153Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-07T19:27:35.903423Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":410}
	{"level":"info","ts":"2024-08-07T19:27:35.908066Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-07T19:27:35.915485Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"3187035f06a56080","timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:27:35.915772Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"3187035f06a56080"}
	{"level":"info","ts":"2024-08-07T19:27:35.915811Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"3187035f06a56080","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-07T19:27:35.921466Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-07T19:27:35.921698Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:27:35.921741Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:27:35.921749Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:27:35.936848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 switched to configuration voters=(3568824936382685312)"}
	{"level":"info","ts":"2024-08-07T19:27:35.936954Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","added-peer-id":"3187035f06a56080","added-peer-peer-urls":["https://192.168.50.208:2380"]}
	{"level":"info","ts":"2024-08-07T19:27:35.937098Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:27:35.937129Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:27:35.959121Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-07T19:27:36.055552Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:27:36.059296Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3187035f06a56080","initial-advertise-peer-urls":["https://192.168.50.208:2380"],"listen-peer-urls":["https://192.168.50.208:2380"],"advertise-client-urls":["https://192.168.50.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:27:36.059086Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.208:2380"}
	{"level":"info","ts":"2024-08-07T19:27:36.065634Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:27:36.065995Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.208:2380"}
	
	
	==> kernel <==
	 19:27:49 up 1 min,  0 users,  load average: 1.65, 0.52, 0.18
	Linux kubernetes-upgrade-235652 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bb9b30801bd00211e2c024a64f8894c3cb1c10f4bac86eecbbf6e4bdcbe60078] <==
	I0807 19:27:45.416442       1 aggregator.go:171] initial CRD sync complete...
	I0807 19:27:45.416475       1 autoregister_controller.go:144] Starting autoregister controller
	I0807 19:27:45.416508       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 19:27:45.416617       1 cache.go:39] Caches are synced for autoregister controller
	I0807 19:27:45.424035       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 19:27:45.436240       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0807 19:27:45.452205       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 19:27:45.467553       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0807 19:27:45.469885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 19:27:45.476359       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0807 19:27:45.477554       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0807 19:27:45.490909       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 19:27:45.491012       1 policy_source.go:224] refreshing policies
	I0807 19:27:45.493589       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:27:45.518136       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 19:27:45.518149       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0807 19:27:45.518275       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0807 19:27:46.278412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 19:27:46.995903       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:27:47.012376       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:27:47.081296       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:27:47.138412       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:27:47.146210       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 19:27:49.064224       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 19:27:49.114635       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e] <==
	I0807 19:27:36.061651       1 options.go:228] external host was not specified, using 192.168.50.208
	I0807 19:27:36.071470       1 server.go:142] Version: v1.31.0-rc.0
	I0807 19:27:36.081030       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [0a9381b88349febdca21c5afe47140a7a9f5de2e5692dc2e794538f8d05ce9ff] <==
	I0807 19:27:48.723821       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0807 19:27:48.723859       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0807 19:27:48.723865       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0807 19:27:48.723871       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0807 19:27:48.723995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-235652"
	I0807 19:27:48.727428       1 shared_informer.go:320] Caches are synced for taint
	I0807 19:27:48.727559       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0807 19:27:48.727635       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-235652"
	I0807 19:27:48.727720       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0807 19:27:48.730400       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0807 19:27:48.739999       1 shared_informer.go:320] Caches are synced for GC
	I0807 19:27:48.751212       1 shared_informer.go:320] Caches are synced for persistent volume
	I0807 19:27:48.763156       1 shared_informer.go:320] Caches are synced for daemon sets
	I0807 19:27:48.792959       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0807 19:27:48.793705       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-235652"
	I0807 19:27:48.794821       1 shared_informer.go:320] Caches are synced for TTL
	I0807 19:27:48.810379       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:27:48.852612       1 shared_informer.go:320] Caches are synced for attach detach
	I0807 19:27:48.858264       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:27:48.897077       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0807 19:27:48.945737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="294.866882ms"
	I0807 19:27:48.949533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="142.598µs"
	I0807 19:27:49.356476       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:27:49.400377       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:27:49.400459       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7] <==
	
	
	==> kube-proxy [af9ebd0355682142e47b39c34401845c400bad845e48023d840cd1b175f0a399] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0807 19:27:46.402584       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0807 19:27:46.425220       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.208"]
	E0807 19:27:46.425369       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0807 19:27:46.483054       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0807 19:27:46.483116       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:27:46.483151       1 server_linux.go:169] "Using iptables Proxier"
	I0807 19:27:46.487163       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0807 19:27:46.487962       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0807 19:27:46.488018       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:27:46.489197       1 config.go:197] "Starting service config controller"
	I0807 19:27:46.489261       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:27:46.489296       1 config.go:104] "Starting endpoint slice config controller"
	I0807 19:27:46.489312       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:27:46.489695       1 config.go:326] "Starting node config controller"
	I0807 19:27:46.490244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:27:46.590987       1 shared_informer.go:320] Caches are synced for node config
	I0807 19:27:46.591121       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:27:46.591133       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff] <==
	
	
	==> kube-scheduler [5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076] <==
	
	
	==> kube-scheduler [e56857e18d0f78ae74825442a84c662401587f4f10090f3ebf10e87c494fde25] <==
	I0807 19:27:43.512617       1 serving.go:386] Generated self-signed cert in-memory
	W0807 19:27:45.285629       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:27:45.285807       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:27:45.285838       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:27:45.285900       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:27:45.407188       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0807 19:27:45.407308       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:27:45.418675       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 19:27:45.418773       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:27:45.419505       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 19:27:45.419638       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W0807 19:27:45.429606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0807 19:27:45.432033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W0807 19:27:45.430340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0807 19:27:45.432155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W0807 19:27:45.430386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0807 19:27:45.432214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W0807 19:27:45.430423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0807 19:27:45.432274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	W0807 19:27:45.430459       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0807 19:27:45.432351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W0807 19:27:45.430492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E0807 19:27:45.432411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	I0807 19:27:45.519870       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:27:41 kubernetes-upgrade-235652 kubelet[3953]: E0807 19:27:41.864023    3953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-235652?timeout=10s\": dial tcp 192.168.50.208:8443: connect: connection refused" interval="400ms"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.058366    3953 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-235652"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: E0807 19:27:42.059287    3953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.208:8443: connect: connection refused" node="kubernetes-upgrade-235652"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.141848    3953 scope.go:117] "RemoveContainer" containerID="eed873fbca8f6faa5a7e9050b686bce55416b72a5810d123854f61a852180041"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.145763    3953 scope.go:117] "RemoveContainer" containerID="55d668d387dd027b7ff45c66dc06b6f48e8ce0f12c7b390db4a8ec6dcc6bc8a7"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.146077    3953 scope.go:117] "RemoveContainer" containerID="eaa9d7f9f3e33b3641fdd49a9afdd3d0f827012084e1aa00147da156b6f4664e"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.147893    3953 scope.go:117] "RemoveContainer" containerID="5f48044ef1c8df198d6a42d502b07f24ddeeabcc21d041372654bd74bcdd2076"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: E0807 19:27:42.265683    3953 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-235652?timeout=10s\": dial tcp 192.168.50.208:8443: connect: connection refused" interval="800ms"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:42.461249    3953 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-235652"
	Aug 07 19:27:42 kubernetes-upgrade-235652 kubelet[3953]: E0807 19:27:42.462230    3953 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.208:8443: connect: connection refused" node="kubernetes-upgrade-235652"
	Aug 07 19:27:43 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:43.265465    3953 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-235652"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.530726    3953 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-235652"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.530820    3953 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-235652"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.530843    3953 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.531886    3953 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.646988    3953 apiserver.go:52] "Watching apiserver"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.656907    3953 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.658411    3953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1aa5ab6-7255-494a-a4f5-613b9296f1d8-xtables-lock\") pod \"kube-proxy-zzs6j\" (UID: \"c1aa5ab6-7255-494a-a4f5-613b9296f1d8\") " pod="kube-system/kube-proxy-zzs6j"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.658574    3953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee738ef3-5e02-4f9b-a52f-1ac7c67aad38-tmp\") pod \"storage-provisioner\" (UID: \"ee738ef3-5e02-4f9b-a52f-1ac7c67aad38\") " pod="kube-system/storage-provisioner"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.658684    3953 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1aa5ab6-7255-494a-a4f5-613b9296f1d8-lib-modules\") pod \"kube-proxy-zzs6j\" (UID: \"c1aa5ab6-7255-494a-a4f5-613b9296f1d8\") " pod="kube-system/kube-proxy-zzs6j"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: E0807 19:27:45.872327    3953 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-235652\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-235652"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.951670    3953 scope.go:117] "RemoveContainer" containerID="50a1f4a78d82dc0fcb2639480a5a7bf0b4efdf8a0b6d85fdcfae9b4ecd08883d"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.951860    3953 scope.go:117] "RemoveContainer" containerID="e1ff41553dae6597e49b6d94bec2fbeac439632bbfcc6d691ba467ccc4f4d2ff"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.952033    3953 scope.go:117] "RemoveContainer" containerID="97447711830a410f35e6f784a446a2493096d223394c782fa94912b6ece7fdaf"
	Aug 07 19:27:45 kubernetes-upgrade-235652 kubelet[3953]: I0807 19:27:45.952223    3953 scope.go:117] "RemoveContainer" containerID="4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0"
	
	
	==> storage-provisioner [4d525a4fd11daa89088046ae42f1736bf2c98d9f88b379734a4a10e0d73f9db0] <==
	I0807 19:27:36.267110       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [6595539c4c15020032f86c913baa82c621dd2f594088552f248743089431c562] <==
	I0807 19:27:46.126493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0807 19:27:46.149570       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0807 19:27:46.149677       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:27:48.715989   77779 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
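Note on the "failed to output last start logs ... bufio.Scanner: token too long" error in the stderr above: this is Go's bufio.Scanner hitting its default 64 KiB per-token limit while reading lastStart.txt, which contains a line longer than that. Below is a minimal sketch (a hypothetical standalone helper, not minikube's actual logs.go code) showing how a scanner with an enlarged buffer avoids that error:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	// readLongLines reads a file line by line, allowing lines larger than
	// bufio.Scanner's default 64 KiB token limit by supplying a bigger buffer.
	func readLongLines(path string, maxLineBytes int) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Start with a 64 KiB buffer but allow tokens up to maxLineBytes.
		sc.Buffer(make([]byte, 0, 64*1024), maxLineBytes)
		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		// Without the Buffer call, a line longer than 64 KiB makes sc.Err()
		// return bufio.ErrTooLong ("bufio.Scanner: token too long"),
		// which is the error reported in the stderr above.
		return lines, sc.Err()
	}
	
	func main() {
		// The file name here is illustrative only.
		lines, err := readLongLines("lastStart.txt", 10*1024*1024)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("read", len(lines), "lines")
	}
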
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-235652 -n kubernetes-upgrade-235652
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-235652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-235652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-235652
--- FAIL: TestKubernetesUpgrade (429.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (71.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-302295 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-302295 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.614937s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-302295] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-302295" primary control-plane node in "pause-302295" cluster
	* Updating the running kvm2 "pause-302295" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-302295" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 19:25:24.378856   75607 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:25:24.379012   75607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:25:24.379023   75607 out.go:304] Setting ErrFile to fd 2...
	I0807 19:25:24.379030   75607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:25:24.379339   75607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:25:24.380039   75607 out.go:298] Setting JSON to false
	I0807 19:25:24.381317   75607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11270,"bootTime":1723047454,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:25:24.381397   75607 start.go:139] virtualization: kvm guest
	I0807 19:25:24.488765   75607 out.go:177] * [pause-302295] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:25:24.654545   75607 notify.go:220] Checking for updates...
	I0807 19:25:24.747108   75607 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:25:24.955947   75607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:25:25.096079   75607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:25:25.222194   75607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:25:25.381445   75607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:25:25.441799   75607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:25:25.523319   75607 config.go:182] Loaded profile config "pause-302295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:25:25.523682   75607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:25:25.523738   75607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:25:25.538509   75607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0807 19:25:25.539056   75607 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:25:25.539635   75607 main.go:141] libmachine: Using API Version  1
	I0807 19:25:25.539661   75607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:25:25.540016   75607 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:25:25.540258   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:25.540513   75607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:25:25.540826   75607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:25:25.540869   75607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:25:25.555491   75607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0807 19:25:25.555923   75607 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:25:25.556540   75607 main.go:141] libmachine: Using API Version  1
	I0807 19:25:25.556570   75607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:25:25.556911   75607 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:25:25.557092   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:25.656039   75607 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 19:25:25.739676   75607 start.go:297] selected driver: kvm2
	I0807 19:25:25.739699   75607 start.go:901] validating driver "kvm2" against &{Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:25:25.739898   75607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:25:25.740377   75607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:25:25.740476   75607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:25:25.756243   75607 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:25:25.756944   75607 cni.go:84] Creating CNI manager for ""
	I0807 19:25:25.756959   75607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:25:25.757032   75607 start.go:340] cluster config:
	{Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:25:25.757173   75607 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:25:25.844288   75607 out.go:177] * Starting "pause-302295" primary control-plane node in "pause-302295" cluster
	I0807 19:25:25.863636   75607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:25:25.863707   75607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 19:25:25.863719   75607 cache.go:56] Caching tarball of preloaded images
	I0807 19:25:25.863820   75607 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:25:25.863835   75607 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 19:25:25.864003   75607 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/config.json ...
	I0807 19:25:25.918360   75607 start.go:360] acquireMachinesLock for pause-302295: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:25:42.325143   75607 start.go:364] duration metric: took 16.406742612s to acquireMachinesLock for "pause-302295"
	I0807 19:25:42.325191   75607 start.go:96] Skipping create...Using existing machine configuration
	I0807 19:25:42.325200   75607 fix.go:54] fixHost starting: 
	I0807 19:25:42.325582   75607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:25:42.325631   75607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:25:42.342498   75607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0807 19:25:42.342857   75607 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:25:42.343332   75607 main.go:141] libmachine: Using API Version  1
	I0807 19:25:42.343354   75607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:25:42.343654   75607 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:25:42.343860   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:42.344021   75607 main.go:141] libmachine: (pause-302295) Calling .GetState
	I0807 19:25:42.345738   75607 fix.go:112] recreateIfNeeded on pause-302295: state=Running err=<nil>
	W0807 19:25:42.345760   75607 fix.go:138] unexpected machine state, will restart: <nil>
	I0807 19:25:42.347758   75607 out.go:177] * Updating the running kvm2 "pause-302295" VM ...
	I0807 19:25:42.349070   75607 machine.go:94] provisionDockerMachine start ...
	I0807 19:25:42.349092   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:42.349292   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:42.352116   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.352559   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.352593   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.352743   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:42.352907   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.353081   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.353247   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:42.353418   75607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:25:42.353647   75607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I0807 19:25:42.353663   75607 main.go:141] libmachine: About to run SSH command:
	hostname
	I0807 19:25:42.469967   75607 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-302295
	
	I0807 19:25:42.470011   75607 main.go:141] libmachine: (pause-302295) Calling .GetMachineName
	I0807 19:25:42.470275   75607 buildroot.go:166] provisioning hostname "pause-302295"
	I0807 19:25:42.470297   75607 main.go:141] libmachine: (pause-302295) Calling .GetMachineName
	I0807 19:25:42.470507   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:42.473611   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.474050   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.474090   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.474219   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:42.474400   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.474598   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.474751   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:42.474915   75607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:25:42.475119   75607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I0807 19:25:42.475136   75607 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-302295 && echo "pause-302295" | sudo tee /etc/hostname
	I0807 19:25:42.609732   75607 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-302295
	
	I0807 19:25:42.609762   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:42.612901   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.613287   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.613320   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.613544   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:42.613730   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.613906   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.614126   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:42.614331   75607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:25:42.614509   75607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I0807 19:25:42.614531   75607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-302295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-302295/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-302295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0807 19:25:42.734205   75607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0807 19:25:42.734239   75607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19389-20864/.minikube CaCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19389-20864/.minikube}
	I0807 19:25:42.734284   75607 buildroot.go:174] setting up certificates
	I0807 19:25:42.734303   75607 provision.go:84] configureAuth start
	I0807 19:25:42.734338   75607 main.go:141] libmachine: (pause-302295) Calling .GetMachineName
	I0807 19:25:42.734655   75607 main.go:141] libmachine: (pause-302295) Calling .GetIP
	I0807 19:25:42.737986   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.738385   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.738475   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.738779   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:42.741666   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.742082   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.742111   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.742320   75607 provision.go:143] copyHostCerts
	I0807 19:25:42.742390   75607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem, removing ...
	I0807 19:25:42.742403   75607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem
	I0807 19:25:42.742476   75607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/ca.pem (1082 bytes)
	I0807 19:25:42.742616   75607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem, removing ...
	I0807 19:25:42.742628   75607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem
	I0807 19:25:42.742662   75607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/cert.pem (1123 bytes)
	I0807 19:25:42.742774   75607 exec_runner.go:144] found /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem, removing ...
	I0807 19:25:42.742786   75607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem
	I0807 19:25:42.742815   75607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19389-20864/.minikube/key.pem (1679 bytes)
	I0807 19:25:42.742907   75607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem org=jenkins.pause-302295 san=[127.0.0.1 192.168.61.241 localhost minikube pause-302295]
	I0807 19:25:42.856757   75607 provision.go:177] copyRemoteCerts
	I0807 19:25:42.856818   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0807 19:25:42.856839   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:42.859836   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.860346   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:42.860388   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:42.860594   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:42.860818   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:42.861018   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:42.861175   75607 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/pause-302295/id_rsa Username:docker}
	I0807 19:25:42.952703   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0807 19:25:42.985391   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0807 19:25:43.013245   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0807 19:25:43.044567   75607 provision.go:87] duration metric: took 310.240495ms to configureAuth
	I0807 19:25:43.044616   75607 buildroot.go:189] setting minikube options for container-runtime
	I0807 19:25:43.044869   75607 config.go:182] Loaded profile config "pause-302295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:25:43.044966   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:43.048276   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:43.048679   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:43.048701   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:43.048921   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:43.049135   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:43.049349   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:43.049490   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:43.049662   75607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:25:43.049877   75607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I0807 19:25:43.049898   75607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0807 19:25:49.944465   75607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0807 19:25:49.944494   75607 machine.go:97] duration metric: took 7.595409327s to provisionDockerMachine
	I0807 19:25:49.944511   75607 start.go:293] postStartSetup for "pause-302295" (driver="kvm2")
	I0807 19:25:49.944523   75607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0807 19:25:49.944543   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:49.944902   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0807 19:25:49.944935   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:49.947858   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:49.948244   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:49.948270   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:49.948443   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:49.948637   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:49.948774   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:49.948892   75607 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/pause-302295/id_rsa Username:docker}
	I0807 19:25:50.034828   75607 ssh_runner.go:195] Run: cat /etc/os-release
	I0807 19:25:50.038948   75607 info.go:137] Remote host: Buildroot 2023.02.9
	I0807 19:25:50.038972   75607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/addons for local assets ...
	I0807 19:25:50.039027   75607 filesync.go:126] Scanning /home/jenkins/minikube-integration/19389-20864/.minikube/files for local assets ...
	I0807 19:25:50.039097   75607 filesync.go:149] local asset: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem -> 280522.pem in /etc/ssl/certs
	I0807 19:25:50.039222   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0807 19:25:50.048777   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:25:50.072870   75607 start.go:296] duration metric: took 128.34448ms for postStartSetup
	I0807 19:25:50.072916   75607 fix.go:56] duration metric: took 7.747716941s for fixHost
	I0807 19:25:50.072940   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:50.075608   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.075998   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:50.076030   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.076198   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:50.076424   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:50.076595   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:50.076747   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:50.076915   75607 main.go:141] libmachine: Using SSH client type: native
	I0807 19:25:50.077130   75607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.241 22 <nil> <nil>}
	I0807 19:25:50.077146   75607 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0807 19:25:50.189126   75607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723058750.180399447
	
	I0807 19:25:50.189152   75607 fix.go:216] guest clock: 1723058750.180399447
	I0807 19:25:50.189172   75607 fix.go:229] Guest: 2024-08-07 19:25:50.180399447 +0000 UTC Remote: 2024-08-07 19:25:50.072920838 +0000 UTC m=+25.738935742 (delta=107.478609ms)
	I0807 19:25:50.189202   75607 fix.go:200] guest clock delta is within tolerance: 107.478609ms
	I0807 19:25:50.189211   75607 start.go:83] releasing machines lock for "pause-302295", held for 7.86403937s
	I0807 19:25:50.189242   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:50.189529   75607 main.go:141] libmachine: (pause-302295) Calling .GetIP
	I0807 19:25:50.192773   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.193271   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:50.193304   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.193477   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:50.194075   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:50.194285   75607 main.go:141] libmachine: (pause-302295) Calling .DriverName
	I0807 19:25:50.194372   75607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0807 19:25:50.194411   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:50.194549   75607 ssh_runner.go:195] Run: cat /version.json
	I0807 19:25:50.194575   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHHostname
	I0807 19:25:50.197245   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.197572   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.197603   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:50.197639   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.197803   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:50.197997   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:50.198039   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:25:50.198074   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:25:50.198166   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:50.198248   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHPort
	I0807 19:25:50.198338   75607 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/pause-302295/id_rsa Username:docker}
	I0807 19:25:50.198412   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHKeyPath
	I0807 19:25:50.198511   75607 main.go:141] libmachine: (pause-302295) Calling .GetSSHUsername
	I0807 19:25:50.198610   75607 sshutil.go:53] new ssh client: &{IP:192.168.61.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/pause-302295/id_rsa Username:docker}
	I0807 19:25:50.286167   75607 ssh_runner.go:195] Run: systemctl --version
	I0807 19:25:50.310597   75607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0807 19:25:50.465691   75607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0807 19:25:50.477983   75607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0807 19:25:50.478077   75607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0807 19:25:50.536596   75607 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0807 19:25:50.536675   75607 start.go:495] detecting cgroup driver to use...
	I0807 19:25:50.536778   75607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0807 19:25:50.605534   75607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0807 19:25:50.661756   75607 docker.go:217] disabling cri-docker service (if available) ...
	I0807 19:25:50.661819   75607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0807 19:25:50.716164   75607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0807 19:25:50.754231   75607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0807 19:25:50.969062   75607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0807 19:25:51.143378   75607 docker.go:233] disabling docker service ...
	I0807 19:25:51.143441   75607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0807 19:25:51.230356   75607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0807 19:25:51.296906   75607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0807 19:25:51.594910   75607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0807 19:25:51.833452   75607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0807 19:25:51.857920   75607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0807 19:25:51.882741   75607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0807 19:25:51.882818   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:51.898343   75607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0807 19:25:51.898411   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:51.911659   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:51.924347   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:51.947240   75607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0807 19:25:51.963656   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:51.986771   75607 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:52.004254   75607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0807 19:25:52.020084   75607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0807 19:25:52.031769   75607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0807 19:25:52.045436   75607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:25:52.252958   75607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0807 19:26:02.575681   75607 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.322683655s)
	I0807 19:26:02.575726   75607 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0807 19:26:02.575785   75607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0807 19:26:02.585913   75607 start.go:563] Will wait 60s for crictl version
	I0807 19:26:02.585982   75607 ssh_runner.go:195] Run: which crictl
	I0807 19:26:02.590102   75607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0807 19:26:02.637395   75607 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0807 19:26:02.637477   75607 ssh_runner.go:195] Run: crio --version
	I0807 19:26:02.678637   75607 ssh_runner.go:195] Run: crio --version
	I0807 19:26:02.710904   75607 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0807 19:26:02.712307   75607 main.go:141] libmachine: (pause-302295) Calling .GetIP
	I0807 19:26:03.224740   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225122   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:26:03.225150   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225337   75607 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0807 19:26:03.230334   75607 kubeadm.go:883] updating cluster {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:26:03.230466   75607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:26:03.230603   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.277296   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.277328   75607 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:26:03.277385   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.321611   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.321638   75607 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:26:03.321647   75607 kubeadm.go:934] updating node { 192.168.61.241 8443 v1.30.3 crio true true} ...
	I0807 19:26:03.321790   75607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-302295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:26:03.321880   75607 ssh_runner.go:195] Run: crio config
	I0807 19:26:03.381852   75607 cni.go:84] Creating CNI manager for ""
	I0807 19:26:03.381875   75607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:03.381890   75607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:26:03.381911   75607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.241 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-302295 NodeName:pause-302295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:26:03.382102   75607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-302295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:26:03.382170   75607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:26:03.392920   75607 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:26:03.392997   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:26:03.404763   75607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0807 19:26:03.425460   75607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:26:03.448959   75607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0807 19:26:03.469750   75607 ssh_runner.go:195] Run: grep 192.168.61.241	control-plane.minikube.internal$ /etc/hosts
	I0807 19:26:03.475140   75607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:26:03.643968   75607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:26:03.691717   75607 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295 for IP: 192.168.61.241
	I0807 19:26:03.691744   75607 certs.go:194] generating shared ca certs ...
	I0807 19:26:03.691764   75607 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:26:03.691967   75607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:26:03.692024   75607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:26:03.692037   75607 certs.go:256] generating profile certs ...
	I0807 19:26:03.692157   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/client.key
	I0807 19:26:03.692267   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key.6b5e59d7
	I0807 19:26:03.692332   75607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key
	I0807 19:26:03.692494   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:26:03.692538   75607 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:26:03.692548   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:26:03.692577   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:26:03.692602   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:26:03.692625   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:26:03.692661   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:26:03.693801   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:26:03.791976   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:26:03.950489   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:26:04.049661   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:26:04.186457   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 19:26:04.426426   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:26:04.495097   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:26:04.556999   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 19:26:04.630846   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:26:04.740723   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:26:04.839089   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:26:04.877778   75607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:26:04.898263   75607 ssh_runner.go:195] Run: openssl version
	I0807 19:26:04.904545   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:26:04.916694   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921438   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921488   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.927075   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:26:04.939585   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:26:04.953004   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958001   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958074   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.964084   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:26:04.981599   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:26:05.011586   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016774   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016845   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.025524   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:26:05.038771   75607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:26:05.043432   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:26:05.051518   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:26:05.058464   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:26:05.066197   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:26:05.071751   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:26:05.078516   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 19:26:05.086322   75607 kubeadm.go:392] StartCluster: {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:05.086469   75607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:26:05.086549   75607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:26:05.145023   75607 cri.go:89] found id: "bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d"
	I0807 19:26:05.145049   75607 cri.go:89] found id: "96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3"
	I0807 19:26:05.145054   75607 cri.go:89] found id: "b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955"
	I0807 19:26:05.145059   75607 cri.go:89] found id: "abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876"
	I0807 19:26:05.145063   75607 cri.go:89] found id: "8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91"
	I0807 19:26:05.145067   75607 cri.go:89] found id: "2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79"
	I0807 19:26:05.145071   75607 cri.go:89] found id: "c204c8d69ed7fc61e972cd8cd369ba304873a7e82aebfbbd272e6c255d7b2dac"
	I0807 19:26:05.145075   75607 cri.go:89] found id: "3c036b1106ca4f92d2d108bffc32c6b42a8557ed77c520f8aa8271f8febb2aba"
	I0807 19:26:05.145078   75607 cri.go:89] found id: "707f2136588365e52be0d52c2206d61e9573762ca3bf91c260fbb0faae2208ef"
	I0807 19:26:05.145095   75607 cri.go:89] found id: "c157405d56a05550fbdc4090412abe258b9c454e17e1853e4426bfa199feff54"
	I0807 19:26:05.145099   75607 cri.go:89] found id: "36d4d11bec1762a447ed6a0dde886a8509f446c7e9d2a88f4a92c6ca5565446b"
	I0807 19:26:05.145102   75607 cri.go:89] found id: ""
	I0807 19:26:05.145154   75607 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-302295 -n pause-302295
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-302295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-302295 logs -n 25: (1.401049205s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo cat              | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo cat              | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo find             | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo crio             | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-853483                       | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p pause-302295 --memory=2048          | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-252907              | running-upgrade-252907    | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p cert-options-405893                 | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-493959            | force-systemd-env-493959  | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p force-systemd-flag-992969           | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:26 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-302295                        | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:26 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-405893 ssh                | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-405893 -- sudo         | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-405893                 | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p cert-expiration-260571              | cert-expiration-260571    | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-235652           | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p kubernetes-upgrade-235652           | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-992969 ssh cat      | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-992969           | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	| start   | -p auto-853483 --memory=3072           | auto-853483               | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
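The table above appears to be the tail of minikube's command audit for this job, and the "Last Start" block that follows is the start log of the most recently launched profile. A minimal sketch of pulling an equivalent dump for one of the profiles listed here, assuming the standard minikube CLI on the CI host:

	# hedged sketch: collect the audit table and last-start log for a profile into a file
	out/minikube-linux-amd64 logs -p pause-302295 --file=pause-302295-logs.txt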
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:26:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:26:03.435917   76375 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:26:03.436281   76375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:03.436311   76375 out.go:304] Setting ErrFile to fd 2...
	I0807 19:26:03.436322   76375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:03.436647   76375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:26:03.437513   76375 out.go:298] Setting JSON to false
	I0807 19:26:03.438855   76375 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11309,"bootTime":1723047454,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:26:03.438946   76375 start.go:139] virtualization: kvm guest
	I0807 19:26:03.441279   76375 out.go:177] * [auto-853483] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:26:03.442669   76375 notify.go:220] Checking for updates...
	I0807 19:26:03.442684   76375 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:26:03.443935   76375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:26:03.445256   76375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:26:03.446411   76375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:26:03.447586   76375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:26:03.448935   76375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:26:03.450902   76375 config.go:182] Loaded profile config "cert-expiration-260571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:26:03.451036   76375 config.go:182] Loaded profile config "kubernetes-upgrade-235652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0807 19:26:03.451241   76375 config.go:182] Loaded profile config "pause-302295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:26:03.451367   76375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:26:03.492939   76375 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 19:26:03.494059   76375 start.go:297] selected driver: kvm2
	I0807 19:26:03.494071   76375 start.go:901] validating driver "kvm2" against <nil>
	I0807 19:26:03.494083   76375 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:26:03.494860   76375 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:03.494956   76375 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:26:03.511045   76375 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:26:03.511130   76375 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 19:26:03.511433   76375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:26:03.511474   76375 cni.go:84] Creating CNI manager for ""
	I0807 19:26:03.511485   76375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:03.511495   76375 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 19:26:03.511604   76375 start.go:340] cluster config:
	{Name:auto-853483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
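The generated cluster config dumped above for auto-853483 is also persisted alongside the profile (the config.json write appears further down at profile.go:143). A minimal sketch of reading it back on the CI host, assuming the paths from this run:

	# hedged sketch: the persisted form of the cluster config printed above
	cat /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json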
	I0807 19:26:03.511738   76375 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:03.513604   76375 out.go:177] * Starting "auto-853483" primary control-plane node in "auto-853483" cluster
	I0807 19:26:02.712307   75607 main.go:141] libmachine: (pause-302295) Calling .GetIP
	I0807 19:26:03.224740   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225122   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:26:03.225150   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225337   75607 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0807 19:26:03.230334   75607 kubeadm.go:883] updating cluster {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:26:03.230466   75607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:26:03.230603   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.277296   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.277328   75607 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:26:03.277385   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.321611   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.321638   75607 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:26:03.321647   75607 kubeadm.go:934] updating node { 192.168.61.241 8443 v1.30.3 crio true true} ...
	I0807 19:26:03.321790   75607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-302295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
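The fragment above is the kubelet systemd drop-in that minikube renders for this node; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes). A minimal sketch of inspecting the rendered unit and the running kubelet from the host, assuming this profile and the `minikube ssh -p <profile> -- <cmd>` pattern already used in the audit table above:

	# hedged sketch: confirm the drop-in and the effective kubelet state on the node
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo systemctl status kubelet --no-pager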
	I0807 19:26:03.321880   75607 ssh_runner.go:195] Run: crio config
	I0807 19:26:03.381852   75607 cni.go:84] Creating CNI manager for ""
	I0807 19:26:03.381875   75607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:03.381890   75607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:26:03.381911   75607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.241 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-302295 NodeName:pause-302295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:26:03.382102   75607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-302295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
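The four YAML documents above (kubeadm InitConfiguration and ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new before driving kubeadm; the 2156-byte scp appears a few lines below. A minimal sketch of reading the rendered file back from the node, assuming this profile:

	# hedged sketch: inspect the kubeadm config as it actually landed on the node
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo wc -c /var/tmp/minikube/kubeadm.yaml.new   # ~2156 bytes per the scp line below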
	
	I0807 19:26:03.382170   75607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:26:03.392920   75607 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:26:03.392997   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:26:03.404763   75607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0807 19:26:03.425460   75607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:26:03.448959   75607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0807 19:26:03.469750   75607 ssh_runner.go:195] Run: grep 192.168.61.241	control-plane.minikube.internal$ /etc/hosts
	I0807 19:26:03.475140   75607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:26:03.643968   75607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:26:03.691717   75607 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295 for IP: 192.168.61.241
	I0807 19:26:03.691744   75607 certs.go:194] generating shared ca certs ...
	I0807 19:26:03.691764   75607 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:26:03.691967   75607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:26:03.692024   75607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:26:03.692037   75607 certs.go:256] generating profile certs ...
	I0807 19:26:03.692157   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/client.key
	I0807 19:26:03.692267   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key.6b5e59d7
	I0807 19:26:03.692332   75607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key
	I0807 19:26:03.692494   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:26:03.692538   75607 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:26:03.692548   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:26:03.692577   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:26:03.692602   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:26:03.692625   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:26:03.692661   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:26:03.693801   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:26:03.791976   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:26:03.950489   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:26:04.049661   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:26:04.186457   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 19:26:02.568229   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | domain cert-expiration-260571 has defined MAC address 52:54:00:5d:7b:38 in network mk-cert-expiration-260571
	I0807 19:26:02.630374   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | unable to find current IP address of domain cert-expiration-260571 in network mk-cert-expiration-260571
	I0807 19:26:02.630394   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | I0807 19:26:02.630259   76086 retry.go:31] will retry after 3.386305406s: waiting for machine to come up
	I0807 19:26:06.018338   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | domain cert-expiration-260571 has defined MAC address 52:54:00:5d:7b:38 in network mk-cert-expiration-260571
	I0807 19:26:06.018823   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | unable to find current IP address of domain cert-expiration-260571 in network mk-cert-expiration-260571
	I0807 19:26:06.018844   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | I0807 19:26:06.018775   76086 retry.go:31] will retry after 2.985033846s: waiting for machine to come up
	I0807 19:26:03.514823   76375 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:26:03.514865   76375 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 19:26:03.514880   76375 cache.go:56] Caching tarball of preloaded images
	I0807 19:26:03.514989   76375 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:26:03.515004   76375 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 19:26:03.515124   76375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json ...
	I0807 19:26:03.515154   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json: {Name:mk35c2e692d3cd2487cad8614e499b7b37f334e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:26:03.515323   76375 start.go:360] acquireMachinesLock for auto-853483: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:26:04.426426   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:26:04.495097   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:26:04.556999   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 19:26:04.630846   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:26:04.740723   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:26:04.839089   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:26:04.877778   75607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:26:04.898263   75607 ssh_runner.go:195] Run: openssl version
	I0807 19:26:04.904545   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:26:04.916694   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921438   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921488   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.927075   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:26:04.939585   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:26:04.953004   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958001   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958074   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.964084   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:26:04.981599   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:26:05.011586   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016774   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016845   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.025524   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:26:05.038771   75607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:26:05.043432   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:26:05.051518   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:26:05.058464   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:26:05.066197   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:26:05.071751   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:26:05.078516   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
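Each of the openssl runs above is a freshness check: `x509 -checkend 86400` exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how minikube decides whether an existing cert can be reused or must be regenerated. A minimal sketch of the same check done by hand, assuming this profile:

	# hedged sketch: check one control-plane cert for at least 24h of remaining validity
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	echo $?   # 0: valid for >= 24h more, 1: expires within 24h (assuming the remote exit status is propagated)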
	I0807 19:26:05.086322   75607 kubeadm.go:392] StartCluster: {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:05.086469   75607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:26:05.086549   75607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:26:05.145023   75607 cri.go:89] found id: "bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d"
	I0807 19:26:05.145049   75607 cri.go:89] found id: "96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3"
	I0807 19:26:05.145054   75607 cri.go:89] found id: "b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955"
	I0807 19:26:05.145059   75607 cri.go:89] found id: "abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876"
	I0807 19:26:05.145063   75607 cri.go:89] found id: "8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91"
	I0807 19:26:05.145067   75607 cri.go:89] found id: "2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79"
	I0807 19:26:05.145071   75607 cri.go:89] found id: "c204c8d69ed7fc61e972cd8cd369ba304873a7e82aebfbbd272e6c255d7b2dac"
	I0807 19:26:05.145075   75607 cri.go:89] found id: "3c036b1106ca4f92d2d108bffc32c6b42a8557ed77c520f8aa8271f8febb2aba"
	I0807 19:26:05.145078   75607 cri.go:89] found id: "707f2136588365e52be0d52c2206d61e9573762ca3bf91c260fbb0faae2208ef"
	I0807 19:26:05.145095   75607 cri.go:89] found id: "c157405d56a05550fbdc4090412abe258b9c454e17e1853e4426bfa199feff54"
	I0807 19:26:05.145099   75607 cri.go:89] found id: "36d4d11bec1762a447ed6a0dde886a8509f446c7e9d2a88f4a92c6ca5565446b"
	I0807 19:26:05.145102   75607 cri.go:89] found id: ""
	I0807 19:26:05.145154   75607 ssh_runner.go:195] Run: sudo runc list -f json
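Before reconfiguring the cluster, StartCluster inventories the existing kube-system containers through CRI (the crictl label filter above) and then takes the low-level runtime view via runc. A minimal sketch of running the same two queries interactively, assuming this profile; both command lines follow the log above:

	# hedged sketch: the same container inventory minikube gathers at StartCluster
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 ssh -p pause-302295 -- sudo runc list -f json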
	
	
	==> CRI-O <==
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.550272636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058792550249154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6eb4770-17e9-49a4-8144-35c1e9b5b1dd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.550829693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84c42d50-d650-46c7-b446-4fce98925803 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.550879584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84c42d50-d650-46c7-b446-4fce98925803 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.551200537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84c42d50-d650-46c7-b446-4fce98925803 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.592592058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e4fc0f7-f840-4bb2-8725-2b95c81e020e name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.592664158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e4fc0f7-f840-4bb2-8725-2b95c81e020e name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.593933640Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc806164-0120-4707-aecb-3898115621d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.594398547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058792594369569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc806164-0120-4707-aecb-3898115621d4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.595014358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=724dbd7b-ebeb-4c69-a05e-ab85af0f41c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.595066440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=724dbd7b-ebeb-4c69-a05e-ab85af0f41c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.595439480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=724dbd7b-ebeb-4c69-a05e-ab85af0f41c9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.642348106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5d12b35-c6a7-401c-a521-d9b35423d73e name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.642466831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5d12b35-c6a7-401c-a521-d9b35423d73e name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.644332971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3bbbf96a-83bd-41a9-85bb-55e4a3f2293f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.644947822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058792644911058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3bbbf96a-83bd-41a9-85bb-55e4a3f2293f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.645516912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0b9817d-4812-4929-8c4a-f43479fe473d name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.645570219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0b9817d-4812-4929-8c4a-f43479fe473d name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.645848517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0b9817d-4812-4929-8c4a-f43479fe473d name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.688687700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a1bf0c5-15ec-499a-90c3-3228f22574a5 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.688786899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a1bf0c5-15ec-499a-90c3-3228f22574a5 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.689864406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d124e66a-a748-437b-831e-37e9fc20b733 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.690358390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058792690332910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d124e66a-a748-437b-831e-37e9fc20b733 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.692744240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8737bb1a-2182-416c-b2d5-65ea0623d51a name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.692819929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8737bb1a-2182-416c-b2d5-65ea0623d51a name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:32 pause-302295 crio[2801]: time="2024-08-07 19:26:32.693073579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8737bb1a-2182-416c-b2d5-65ea0623d51a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e368f51ab1188       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago      Running             kube-proxy                3                   3de702f22ee53       kube-proxy-65jsz
	f432a7c2378b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   95c5152c0c9f9       coredns-7db6d8ff4d-wt7kx
	d7a0ac0038370       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago      Running             kube-scheduler            3                   f12b98b54f2e8       kube-scheduler-pause-302295
	5745e3db6bbef       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago      Running             kube-controller-manager   3                   e59408271a41a       kube-controller-manager-pause-302295
	d57a991ab06e3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Running             kube-apiserver            3                   7a07fd027197d       kube-apiserver-pause-302295
	d50105e5c7b9f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      3                   8b7d6b219ecf7       etcd-pause-302295
	bbd28aba92481       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   28 seconds ago      Exited              kube-scheduler            2                   f12b98b54f2e8       kube-scheduler-pause-302295
	96fb66de2a1e9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   28 seconds ago      Exited              kube-proxy                2                   3de702f22ee53       kube-proxy-65jsz
	b90d88fbf6c91       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   28 seconds ago      Exited              kube-apiserver            2                   7a07fd027197d       kube-apiserver-pause-302295
	abaceb4ef5b1a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago      Exited              etcd                      2                   8b7d6b219ecf7       etcd-pause-302295
	8b4194ce73361       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   28 seconds ago      Exited              kube-controller-manager   2                   e59408271a41a       kube-controller-manager-pause-302295
	2bf0e87247a85       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   41 seconds ago      Exited              coredns                   1                   955fab1afeba5       coredns-7db6d8ff4d-wt7kx
	
	
	==> coredns [2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51587 - 23747 "HINFO IN 3867778170067497349.7196442350379056416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009503063s
	
	
	==> coredns [f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35140 - 37495 "HINFO IN 8659281342019826327.3663780689352838898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020889558s
	
	
	==> describe nodes <==
	Name:               pause-302295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-302295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=pause-302295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T19_25_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:25:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-302295
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:26:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.241
	  Hostname:    pause-302295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6debab9d5be4d8ba954a1bebef70464
	  System UUID:                e6debab9-d5be-4d8b-a954-a1bebef70464
	  Boot ID:                    7e827661-e594-49cb-aeb7-87caaf3b46a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wt7kx                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-pause-302295                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         86s
	  kube-system                 kube-apiserver-pause-302295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-302295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-65jsz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-302295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node pause-302295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node pause-302295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node pause-302295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeReady                85s                kubelet          Node pause-302295 status is now: NodeReady
	  Normal  RegisteredNode           73s                node-controller  Node pause-302295 event: Registered Node pause-302295 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-302295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-302295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-302295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-302295 event: Registered Node pause-302295 in Controller
	
	
	==> dmesg <==
	[  +0.064878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057114] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.193306] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.144178] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.284496] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +4.414764] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.060555] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.054387] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[Aug 7 19:25] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.880157] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +4.278352] kauditd_printk_skb: 58 callbacks suppressed
	[  +9.232937] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[ +30.060576] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.352530] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +0.180588] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.369911] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +0.283736] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +0.428498] systemd-fstab-generator[2697]: Ignoring "noauto" option for root device
	[Aug 7 19:26] systemd-fstab-generator[2992]: Ignoring "noauto" option for root device
	[  +0.101970] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.229488] kauditd_printk_skb: 92 callbacks suppressed
	[  +1.817771] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[  +4.632094] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.879943] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.298410] systemd-fstab-generator[4237]: Ignoring "noauto" option for root device
	
	
	==> etcd [abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876] <==
	{"level":"info","ts":"2024-08-07T19:26:04.838865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e60cc1b116e52d7a","initial-advertise-peer-urls":["https://192.168.61.241:2380"],"listen-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:26:06.14117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgPreVoteResp from e60cc1b116e52d7a at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgVoteResp from e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became leader at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e60cc1b116e52d7a elected leader e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.144366Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e60cc1b116e52d7a","local-member-attributes":"{Name:pause-302295 ClientURLs:[https://192.168.61.241:2379]}","request-path":"/0/members/e60cc1b116e52d7a/attributes","cluster-id":"caca2a402ef45298","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:26:06.144427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:06.147428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:06.156244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:06.156283Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:06.162789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.241:2379"}
	{"level":"info","ts":"2024-08-07T19:26:06.16606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:26:09.335049Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-07T19:26:09.335173Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-302295","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"]}
	{"level":"warn","ts":"2024-08-07T19:26:09.335301Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.335328Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.337515Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.241:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.337545Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.241:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-07T19:26:09.337611Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e60cc1b116e52d7a","current-leader-member-id":"e60cc1b116e52d7a"}
	{"level":"info","ts":"2024-08-07T19:26:09.340761Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:09.340975Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:09.340996Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-302295","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"]}
	
	
	==> etcd [d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1] <==
	{"level":"info","ts":"2024-08-07T19:26:11.791156Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:26:11.791201Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:26:11.791439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a switched to configuration voters=(16576837294781443450)"}
	{"level":"info","ts":"2024-08-07T19:26:11.791514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"caca2a402ef45298","local-member-id":"e60cc1b116e52d7a","added-peer-id":"e60cc1b116e52d7a","added-peer-peer-urls":["https://192.168.61.241:2380"]}
	{"level":"info","ts":"2024-08-07T19:26:11.791627Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"caca2a402ef45298","local-member-id":"e60cc1b116e52d7a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:26:11.792175Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:26:11.820612Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:26:11.820863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:11.822161Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:11.821003Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e60cc1b116e52d7a","initial-advertise-peer-urls":["https://192.168.61.241:2380"],"listen-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:26:11.821027Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:26:13.259546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgPreVoteResp from e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became candidate at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.259676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgVoteResp from e60cc1b116e52d7a at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.259684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became leader at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.2597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e60cc1b116e52d7a elected leader e60cc1b116e52d7a at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.265896Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e60cc1b116e52d7a","local-member-attributes":"{Name:pause-302295 ClientURLs:[https://192.168.61.241:2379]}","request-path":"/0/members/e60cc1b116e52d7a/attributes","cluster-id":"caca2a402ef45298","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:26:13.265965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:13.26591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:13.266577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:13.266609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:13.268838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:26:13.281931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.241:2379"}
	
	
	==> kernel <==
	 19:26:33 up 2 min,  0 users,  load average: 0.86, 0.32, 0.11
	Linux pause-302295 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955] <==
	I0807 19:26:08.200642       1 controller.go:167] Shutting down OpenAPI controller
	I0807 19:26:08.200651       1 available_controller.go:439] Shutting down AvailableConditionController
	I0807 19:26:08.200670       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0807 19:26:08.200696       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0807 19:26:08.200706       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0807 19:26:08.200719       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0807 19:26:08.200730       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0807 19:26:08.200997       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0807 19:26:08.201443       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:08.201560       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 19:26:08.201597       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:08.201690       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0807 19:26:08.201717       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0807 19:26:08.202358       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0807 19:26:08.202571       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0807 19:26:08.204386       1 controller.go:157] Shutting down quota evaluator
	I0807 19:26:08.204436       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204757       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204797       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204824       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204858       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.205008       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 19:26:08.207204       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0807 19:26:08.778033       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0807 19:26:08.778511       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-apiserver [d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed] <==
	I0807 19:26:14.530291       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:14.579318       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 19:26:14.580488       1 aggregator.go:165] initial CRD sync complete...
	I0807 19:26:14.580548       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 19:26:14.580573       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 19:26:14.639342       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 19:26:14.639382       1 policy_source.go:224] refreshing policies
	I0807 19:26:14.639569       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 19:26:14.673005       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 19:26:14.673152       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 19:26:14.673160       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 19:26:14.677310       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 19:26:14.683206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 19:26:14.683632       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 19:26:14.683722       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 19:26:14.684174       1 cache.go:39] Caches are synced for autoregister controller
	I0807 19:26:14.684527       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:26:15.480845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 19:26:16.194507       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:26:16.218211       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:26:16.265880       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:26:16.308244       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:26:16.315370       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 19:26:27.161280       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 19:26:27.167931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f] <==
	I0807 19:26:27.050201       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0807 19:26:27.059328       1 shared_informer.go:320] Caches are synced for deployment
	I0807 19:26:27.063975       1 shared_informer.go:320] Caches are synced for persistent volume
	I0807 19:26:27.065161       1 shared_informer.go:320] Caches are synced for HPA
	I0807 19:26:27.066376       1 shared_informer.go:320] Caches are synced for expand
	I0807 19:26:27.069479       1 shared_informer.go:320] Caches are synced for stateful set
	I0807 19:26:27.070769       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0807 19:26:27.072770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.927868ms"
	I0807 19:26:27.073673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.349µs"
	I0807 19:26:27.073742       1 shared_informer.go:320] Caches are synced for PV protection
	I0807 19:26:27.094597       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0807 19:26:27.097188       1 shared_informer.go:320] Caches are synced for GC
	I0807 19:26:27.099722       1 shared_informer.go:320] Caches are synced for cronjob
	I0807 19:26:27.114019       1 shared_informer.go:320] Caches are synced for PVC protection
	I0807 19:26:27.151173       1 shared_informer.go:320] Caches are synced for endpoint
	I0807 19:26:27.156308       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0807 19:26:27.186395       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:26:27.219923       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:26:27.224795       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0807 19:26:27.230851       1 shared_informer.go:320] Caches are synced for attach detach
	I0807 19:26:27.261480       1 shared_informer.go:320] Caches are synced for namespace
	I0807 19:26:27.300660       1 shared_informer.go:320] Caches are synced for service account
	I0807 19:26:27.720032       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:26:27.736411       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:26:27.736676       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91] <==
	I0807 19:26:05.849670       1 serving.go:380] Generated self-signed cert in-memory
	I0807 19:26:06.573837       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0807 19:26:06.576172       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:06.577729       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0807 19:26:06.578290       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:26:06.578383       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:06.578507       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3] <==
	I0807 19:26:06.062298       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:26:07.908651       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.241"]
	I0807 19:26:07.945082       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:26:07.945194       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:26:07.945211       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:26:07.947771       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:26:07.947995       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:26:07.948022       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:07.949216       1 config.go:192] "Starting service config controller"
	I0807 19:26:07.949252       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:26:07.949302       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:26:07.949322       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:26:07.949853       1 config.go:319] "Starting node config controller"
	I0807 19:26:07.949882       1 shared_informer.go:313] Waiting for caches to sync for node config
	
	
	==> kube-proxy [e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8] <==
	I0807 19:26:15.427193       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:26:15.452992       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.241"]
	I0807 19:26:15.505423       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:26:15.505499       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:26:15.505524       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:26:15.509646       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:26:15.510246       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:26:15.510297       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:15.511419       1 config.go:192] "Starting service config controller"
	I0807 19:26:15.511471       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:26:15.511506       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:26:15.511521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:26:15.512066       1 config.go:319] "Starting node config controller"
	I0807 19:26:15.514887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:26:15.611894       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:26:15.612031       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:26:15.616263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d] <==
	I0807 19:26:05.914954       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:26:07.841445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:26:07.841546       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:26:07.841571       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:26:07.841578       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:26:07.898232       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:26:07.898268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:07.900509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0807 19:26:07.900588       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0807 19:26:07.900767       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188] <==
	I0807 19:26:12.472684       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:26:14.586474       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:26:14.588996       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:26:14.589162       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:26:14.589191       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:26:14.620955       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:26:14.621070       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:14.622722       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 19:26:14.624208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 19:26:14.625296       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:26:14.624222       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:26:14.725580       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.123145    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c95d2d175eb4c58d8aa8e679da35def3-usr-share-ca-certificates\") pod \"kube-apiserver-pause-302295\" (UID: \"c95d2d175eb4c58d8aa8e679da35def3\") " pod="kube-system/kube-apiserver-pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.124029    3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-302295?timeout=10s\": dial tcp 192.168.61.241:8443: connect: connection refused" interval="400ms"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.221159    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.222240    3805 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.241:8443: connect: connection refused" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.345388    3805 scope.go:117] "RemoveContainer" containerID="abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.347911    3805 scope.go:117] "RemoveContainer" containerID="b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.349199    3805 scope.go:117] "RemoveContainer" containerID="8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.350290    3805 scope.go:117] "RemoveContainer" containerID="bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.525606    3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-302295?timeout=10s\": dial tcp 192.168.61.241:8443: connect: connection refused" interval="800ms"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.626129    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.626980    3805 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.241:8443: connect: connection refused" node="pause-302295"
	Aug 07 19:26:12 pause-302295 kubelet[3805]: I0807 19:26:12.428498    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.761792    3805 kubelet_node_status.go:112] "Node was previously registered" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.762286    3805 kubelet_node_status.go:76] "Successfully registered node" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.763912    3805 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.764972    3805 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.906332    3805 apiserver.go:52] "Watching apiserver"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.908849    3805 topology_manager.go:215] "Topology Admit Handler" podUID="42922f3b-0eea-408a-90a7-679748a29fb0" podNamespace="kube-system" podName="kube-proxy-65jsz"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.908969    3805 topology_manager.go:215] "Topology Admit Handler" podUID="edd1b471-d406-425e-88b2-3a60d3a2dd2e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wt7kx"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.919012    3805 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.991456    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42922f3b-0eea-408a-90a7-679748a29fb0-xtables-lock\") pod \"kube-proxy-65jsz\" (UID: \"42922f3b-0eea-408a-90a7-679748a29fb0\") " pod="kube-system/kube-proxy-65jsz"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.991560    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42922f3b-0eea-408a-90a7-679748a29fb0-lib-modules\") pod \"kube-proxy-65jsz\" (UID: \"42922f3b-0eea-408a-90a7-679748a29fb0\") " pod="kube-system/kube-proxy-65jsz"
	Aug 07 19:26:15 pause-302295 kubelet[3805]: I0807 19:26:15.209863    3805 scope.go:117] "RemoveContainer" containerID="2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79"
	Aug 07 19:26:15 pause-302295 kubelet[3805]: I0807 19:26:15.210655    3805 scope.go:117] "RemoveContainer" containerID="96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3"
	Aug 07 19:26:24 pause-302295 kubelet[3805]: I0807 19:26:24.086025    3805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:26:32.262384   76663 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-302295 -n pause-302295
helpers_test.go:261: (dbg) Run:  kubectl --context pause-302295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-302295 -n pause-302295
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-302295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-302295 logs -n 25: (1.443020385s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo cat              | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo cat              | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo                  | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo find             | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-853483 sudo crio             | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-853483                       | cilium-853483             | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p pause-302295 --memory=2048          | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-252907              | running-upgrade-252907    | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p cert-options-405893                 | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:25 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-493959            | force-systemd-env-493959  | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:24 UTC |
	| start   | -p force-systemd-flag-992969           | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:24 UTC | 07 Aug 24 19:26 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-302295                        | pause-302295              | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:26 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-405893 ssh                | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-405893 -- sudo         | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-405893                 | cert-options-405893       | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p cert-expiration-260571              | cert-expiration-260571    | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-235652           | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC | 07 Aug 24 19:25 UTC |
	| start   | -p kubernetes-upgrade-235652           | kubernetes-upgrade-235652 | jenkins | v1.33.1 | 07 Aug 24 19:25 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-992969 ssh cat      | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-992969           | force-systemd-flag-992969 | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC | 07 Aug 24 19:26 UTC |
	| start   | -p auto-853483 --memory=3072           | auto-853483               | jenkins | v1.33.1 | 07 Aug 24 19:26 UTC |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 19:26:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 19:26:03.435917   76375 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:26:03.436281   76375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:03.436311   76375 out.go:304] Setting ErrFile to fd 2...
	I0807 19:26:03.436322   76375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:26:03.436647   76375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:26:03.437513   76375 out.go:298] Setting JSON to false
	I0807 19:26:03.438855   76375 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11309,"bootTime":1723047454,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 19:26:03.438946   76375 start.go:139] virtualization: kvm guest
	I0807 19:26:03.441279   76375 out.go:177] * [auto-853483] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 19:26:03.442669   76375 notify.go:220] Checking for updates...
	I0807 19:26:03.442684   76375 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 19:26:03.443935   76375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 19:26:03.445256   76375 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 19:26:03.446411   76375 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 19:26:03.447586   76375 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 19:26:03.448935   76375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 19:26:03.450902   76375 config.go:182] Loaded profile config "cert-expiration-260571": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:26:03.451036   76375 config.go:182] Loaded profile config "kubernetes-upgrade-235652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0807 19:26:03.451241   76375 config.go:182] Loaded profile config "pause-302295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:26:03.451367   76375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 19:26:03.492939   76375 out.go:177] * Using the kvm2 driver based on user configuration
	I0807 19:26:03.494059   76375 start.go:297] selected driver: kvm2
	I0807 19:26:03.494071   76375 start.go:901] validating driver "kvm2" against <nil>
	I0807 19:26:03.494083   76375 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 19:26:03.494860   76375 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:03.494956   76375 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 19:26:03.511045   76375 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 19:26:03.511130   76375 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 19:26:03.511433   76375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0807 19:26:03.511474   76375 cni.go:84] Creating CNI manager for ""
	I0807 19:26:03.511485   76375 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:03.511495   76375 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 19:26:03.511604   76375 start.go:340] cluster config:
	{Name:auto-853483 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-853483 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:03.511738   76375 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 19:26:03.513604   76375 out.go:177] * Starting "auto-853483" primary control-plane node in "auto-853483" cluster
	I0807 19:26:02.712307   75607 main.go:141] libmachine: (pause-302295) Calling .GetIP
	I0807 19:26:03.224740   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225122   75607 main.go:141] libmachine: (pause-302295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:95:b2", ip: ""} in network mk-pause-302295: {Iface:virbr1 ExpiryTime:2024-08-07 20:24:41 +0000 UTC Type:0 Mac:52:54:00:bc:95:b2 Iaid: IPaddr:192.168.61.241 Prefix:24 Hostname:pause-302295 Clientid:01:52:54:00:bc:95:b2}
	I0807 19:26:03.225150   75607 main.go:141] libmachine: (pause-302295) DBG | domain pause-302295 has defined IP address 192.168.61.241 and MAC address 52:54:00:bc:95:b2 in network mk-pause-302295
	I0807 19:26:03.225337   75607 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0807 19:26:03.230334   75607 kubeadm.go:883] updating cluster {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0807 19:26:03.230466   75607 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:26:03.230603   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.277296   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.277328   75607 crio.go:433] Images already preloaded, skipping extraction
	I0807 19:26:03.277385   75607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0807 19:26:03.321611   75607 crio.go:514] all images are preloaded for cri-o runtime.
	I0807 19:26:03.321638   75607 cache_images.go:84] Images are preloaded, skipping loading
	I0807 19:26:03.321647   75607 kubeadm.go:934] updating node { 192.168.61.241 8443 v1.30.3 crio true true} ...
	I0807 19:26:03.321790   75607 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-302295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0807 19:26:03.321880   75607 ssh_runner.go:195] Run: crio config
	I0807 19:26:03.381852   75607 cni.go:84] Creating CNI manager for ""
	I0807 19:26:03.381875   75607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 19:26:03.381890   75607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0807 19:26:03.381911   75607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.241 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-302295 NodeName:pause-302295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0807 19:26:03.382102   75607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-302295"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.241"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0807 19:26:03.382170   75607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0807 19:26:03.392920   75607 binaries.go:44] Found k8s binaries, skipping transfer
	I0807 19:26:03.392997   75607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0807 19:26:03.404763   75607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0807 19:26:03.425460   75607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0807 19:26:03.448959   75607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0807 19:26:03.469750   75607 ssh_runner.go:195] Run: grep 192.168.61.241	control-plane.minikube.internal$ /etc/hosts
	I0807 19:26:03.475140   75607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0807 19:26:03.643968   75607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0807 19:26:03.691717   75607 certs.go:68] Setting up /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295 for IP: 192.168.61.241
	I0807 19:26:03.691744   75607 certs.go:194] generating shared ca certs ...
	I0807 19:26:03.691764   75607 certs.go:226] acquiring lock for ca certs: {Name:mkee954258064273498764506faba6feea3b6003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:26:03.691967   75607 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key
	I0807 19:26:03.692024   75607 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key
	I0807 19:26:03.692037   75607 certs.go:256] generating profile certs ...
	I0807 19:26:03.692157   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/client.key
	I0807 19:26:03.692267   75607 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key.6b5e59d7
	I0807 19:26:03.692332   75607 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key
	I0807 19:26:03.692494   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem (1338 bytes)
	W0807 19:26:03.692538   75607 certs.go:480] ignoring /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052_empty.pem, impossibly tiny 0 bytes
	I0807 19:26:03.692548   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca-key.pem (1679 bytes)
	I0807 19:26:03.692577   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/ca.pem (1082 bytes)
	I0807 19:26:03.692602   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/cert.pem (1123 bytes)
	I0807 19:26:03.692625   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/certs/key.pem (1679 bytes)
	I0807 19:26:03.692661   75607 certs.go:484] found cert: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem (1708 bytes)
	I0807 19:26:03.693801   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0807 19:26:03.791976   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0807 19:26:03.950489   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0807 19:26:04.049661   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0807 19:26:04.186457   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0807 19:26:02.568229   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | domain cert-expiration-260571 has defined MAC address 52:54:00:5d:7b:38 in network mk-cert-expiration-260571
	I0807 19:26:02.630374   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | unable to find current IP address of domain cert-expiration-260571 in network mk-cert-expiration-260571
	I0807 19:26:02.630394   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | I0807 19:26:02.630259   76086 retry.go:31] will retry after 3.386305406s: waiting for machine to come up
	I0807 19:26:06.018338   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | domain cert-expiration-260571 has defined MAC address 52:54:00:5d:7b:38 in network mk-cert-expiration-260571
	I0807 19:26:06.018823   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | unable to find current IP address of domain cert-expiration-260571 in network mk-cert-expiration-260571
	I0807 19:26:06.018844   75881 main.go:141] libmachine: (cert-expiration-260571) DBG | I0807 19:26:06.018775   76086 retry.go:31] will retry after 2.985033846s: waiting for machine to come up
	I0807 19:26:03.514823   76375 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 19:26:03.514865   76375 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 19:26:03.514880   76375 cache.go:56] Caching tarball of preloaded images
	I0807 19:26:03.514989   76375 preload.go:172] Found /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0807 19:26:03.515004   76375 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0807 19:26:03.515124   76375 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json ...
	I0807 19:26:03.515154   76375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/auto-853483/config.json: {Name:mk35c2e692d3cd2487cad8614e499b7b37f334e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 19:26:03.515323   76375 start.go:360] acquireMachinesLock for auto-853483: {Name:mk247a56355bd763fa3061d99f6a9ceb3bbb34dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0807 19:26:04.426426   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0807 19:26:04.495097   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0807 19:26:04.556999   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/pause-302295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0807 19:26:04.630846   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/ssl/certs/280522.pem --> /usr/share/ca-certificates/280522.pem (1708 bytes)
	I0807 19:26:04.740723   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0807 19:26:04.839089   75607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19389-20864/.minikube/certs/28052.pem --> /usr/share/ca-certificates/28052.pem (1338 bytes)
	I0807 19:26:04.877778   75607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0807 19:26:04.898263   75607 ssh_runner.go:195] Run: openssl version
	I0807 19:26:04.904545   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/280522.pem && ln -fs /usr/share/ca-certificates/280522.pem /etc/ssl/certs/280522.pem"
	I0807 19:26:04.916694   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921438   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  7 18:17 /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.921488   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/280522.pem
	I0807 19:26:04.927075   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/280522.pem /etc/ssl/certs/3ec20f2e.0"
	I0807 19:26:04.939585   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0807 19:26:04.953004   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958001   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  7 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.958074   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0807 19:26:04.964084   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0807 19:26:04.981599   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28052.pem && ln -fs /usr/share/ca-certificates/28052.pem /etc/ssl/certs/28052.pem"
	I0807 19:26:05.011586   75607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016774   75607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  7 18:17 /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.016845   75607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28052.pem
	I0807 19:26:05.025524   75607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/28052.pem /etc/ssl/certs/51391683.0"
	I0807 19:26:05.038771   75607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0807 19:26:05.043432   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0807 19:26:05.051518   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0807 19:26:05.058464   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0807 19:26:05.066197   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0807 19:26:05.071751   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0807 19:26:05.078516   75607 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0807 19:26:05.086322   75607 kubeadm.go:392] StartCluster: {Name:pause-302295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-302295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 19:26:05.086469   75607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0807 19:26:05.086549   75607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0807 19:26:05.145023   75607 cri.go:89] found id: "bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d"
	I0807 19:26:05.145049   75607 cri.go:89] found id: "96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3"
	I0807 19:26:05.145054   75607 cri.go:89] found id: "b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955"
	I0807 19:26:05.145059   75607 cri.go:89] found id: "abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876"
	I0807 19:26:05.145063   75607 cri.go:89] found id: "8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91"
	I0807 19:26:05.145067   75607 cri.go:89] found id: "2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79"
	I0807 19:26:05.145071   75607 cri.go:89] found id: "c204c8d69ed7fc61e972cd8cd369ba304873a7e82aebfbbd272e6c255d7b2dac"
	I0807 19:26:05.145075   75607 cri.go:89] found id: "3c036b1106ca4f92d2d108bffc32c6b42a8557ed77c520f8aa8271f8febb2aba"
	I0807 19:26:05.145078   75607 cri.go:89] found id: "707f2136588365e52be0d52c2206d61e9573762ca3bf91c260fbb0faae2208ef"
	I0807 19:26:05.145095   75607 cri.go:89] found id: "c157405d56a05550fbdc4090412abe258b9c454e17e1853e4426bfa199feff54"
	I0807 19:26:05.145099   75607 cri.go:89] found id: "36d4d11bec1762a447ed6a0dde886a8509f446c7e9d2a88f4a92c6ca5565446b"
	I0807 19:26:05.145102   75607 cri.go:89] found id: ""
	I0807 19:26:05.145154   75607 ssh_runner.go:195] Run: sudo runc list -f json
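	(Editor's note on the certificate checks above: the `openssl x509 -noout -in <cert> -checkend 86400` commands are how minikube verifies that each control-plane certificate remains valid for at least the next 24 hours before reusing it; openssl exits non-zero if the certificate expires within the given number of seconds. The following is a minimal, hedged Go sketch of an equivalent in-process check, not minikube's actual code; the file path used in main is hypothetical, whereas the real run checks files under /var/lib/minikube/certs/ over SSH.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports whether the PEM-encoded certificate at path is still valid
	// for at least duration d, mirroring `openssl x509 -checkend <seconds>`.
	func checkEnd(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Valid only if the expiry lies beyond now+d.
		return cert.NotAfter.After(time.Now().Add(d)), nil
	}

	func main() {
		// Hypothetical local path, used here only for illustration.
		ok, err := checkEnd("apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if !ok {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}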
	
	
	==> CRI-O <==
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.633503168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058794633465768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bec8ed86-5be3-4a23-afd1-4fab2454e117 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.634528428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17ad2fa4-f408-41c0-9131-18d1fd088e62 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.634604393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17ad2fa4-f408-41c0-9131-18d1fd088e62 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.634894672Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17ad2fa4-f408-41c0-9131-18d1fd088e62 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.679042584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7d36ed1-39f7-466e-ab03-14dd5fc85420 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.679174108Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7d36ed1-39f7-466e-ab03-14dd5fc85420 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.680429203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0114cc17-4dbe-44e8-8912-5853ad059915 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.681490026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058794681464641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0114cc17-4dbe-44e8-8912-5853ad059915 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.682239142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=483e83c3-241a-4618-b252-ec6996bea0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.682296465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=483e83c3-241a-4618-b252-ec6996bea0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.682529884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=483e83c3-241a-4618-b252-ec6996bea0ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.730444138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e00ee7fb-9458-44dd-84f6-7856db2d2305 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.730657560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e00ee7fb-9458-44dd-84f6-7856db2d2305 name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.732168622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da73dbd1-92d3-43a3-8575-abb7d7946035 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.732557442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058794732528401,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da73dbd1-92d3-43a3-8575-abb7d7946035 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.733431060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fd2b953-ab96-4618-94c4-3725911e2dda name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.733499175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fd2b953-ab96-4618-94c4-3725911e2dda name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.733877908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fd2b953-ab96-4618-94c4-3725911e2dda name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.784830626Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb2379bc-2cf9-4ad6-ab8a-b1f4d483deae name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.784930942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb2379bc-2cf9-4ad6-ab8a-b1f4d483deae name=/runtime.v1.RuntimeService/Version
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.787185481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1503382-829d-42ea-bed5-8d957b7b109a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.787722874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723058794787688804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1503382-829d-42ea-bed5-8d957b7b109a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.788692089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be379259-6e23-41fd-bbda-06ff970cddc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.788774153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be379259-6e23-41fd-bbda-06ff970cddc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 07 19:26:34 pause-302295 crio[2801]: time="2024-08-07 19:26:34.789154254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52,PodSandboxId:95c5152c0c9f9a197009bd0e66533badc87df62e5fca23c1e4c8d279ea2f5f3a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723058775228404825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1723058775232695118,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea655,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1723058771377056803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1723058771361193202,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f
2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1723058771383572759,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1723058771365469709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io
.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3,PodSandboxId:3de702f22ee5384dfbd208b6500528b6097fe416aede939f02cd7694bca6cb1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1723058764120966876,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-65jsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42922f3b-0eea-408a-90a7-679748a29fb0,},Annotations:map[string]string{io.kubernetes.container.hash: f5dea6
55,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d,PodSandboxId:f12b98b54f2e813511533192df9c068f114cf20323d2cfdb989f75a422ba7287,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1723058764253455193,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3d33301b1d4016ce6724fc66ebf5dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955,PodSandboxId:7a07fd027197dd47a1ff97d387bef45bbc982318bc3d5db712175cbec6c0d584,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1723058764118791236,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c95d2d175eb4c58d8aa8e679da35def3,},Annotations:map[string]string{io.kubernetes.container.hash: c3a629c3,io.kubernetes.container.restartCoun
t: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876,PodSandboxId:8b7d6b219ecf7139b55e120b4a31d054b1824fc556e37dbe2549bacd0e75aea0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1723058764049806694,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb2aed8c6d3bb2b8ec6ea43a46d383f2,},Annotations:map[string]string{io.kubernetes.container.hash: 65fa513a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91,PodSandboxId:e59408271a41a558e3bfb413e18923f66181496f09a42e10345a62ccd0d50b88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1723058763981879714,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-302295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 556aaea724929057b03a8a31b6107959,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79,PodSandboxId:955fab1afeba52e02f73e15f29fa06d773d51012358f12b092883da21dba9fa8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723058751689676543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wt7kx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edd1b471-d406-425e-88b2-3a60d3a2dd2e,},Annotations:map[string]string{io.kubernetes.container.hash: 239f3c45,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be379259-6e23-41fd-bbda-06ff970cddc5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e368f51ab1188       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago      Running             kube-proxy                3                   3de702f22ee53       kube-proxy-65jsz
	f432a7c2378b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   95c5152c0c9f9       coredns-7db6d8ff4d-wt7kx
	d7a0ac0038370       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            3                   f12b98b54f2e8       kube-scheduler-pause-302295
	5745e3db6bbef       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago      Running             kube-controller-manager   3                   e59408271a41a       kube-controller-manager-pause-302295
	d57a991ab06e3       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago      Running             kube-apiserver            3                   7a07fd027197d       kube-apiserver-pause-302295
	d50105e5c7b9f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      3                   8b7d6b219ecf7       etcd-pause-302295
	bbd28aba92481       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   30 seconds ago      Exited              kube-scheduler            2                   f12b98b54f2e8       kube-scheduler-pause-302295
	96fb66de2a1e9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   30 seconds ago      Exited              kube-proxy                2                   3de702f22ee53       kube-proxy-65jsz
	b90d88fbf6c91       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   30 seconds ago      Exited              kube-apiserver            2                   7a07fd027197d       kube-apiserver-pause-302295
	abaceb4ef5b1a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago      Exited              etcd                      2                   8b7d6b219ecf7       etcd-pause-302295
	8b4194ce73361       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   30 seconds ago      Exited              kube-controller-manager   2                   e59408271a41a       kube-controller-manager-pause-302295
	2bf0e87247a85       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   955fab1afeba5       coredns-7db6d8ff4d-wt7kx
	
	
	==> coredns [2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51587 - 23747 "HINFO IN 3867778170067497349.7196442350379056416. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009503063s
	
	
	==> coredns [f432a7c2378b5719533500bb43cd3a6185714375836c2334abfd4ec10eacfe52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35140 - 37495 "HINFO IN 8659281342019826327.3663780689352838898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020889558s
	
	
	==> describe nodes <==
	Name:               pause-302295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-302295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0ee3e7d60a6f6702c6c553e4aadfb0f66d72da6e
	                    minikube.k8s.io/name=pause-302295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_07T19_25_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 07 Aug 2024 19:25:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-302295
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 07 Aug 2024 19:26:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 07 Aug 2024 19:26:14 +0000   Wed, 07 Aug 2024 19:25:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.241
	  Hostname:    pause-302295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 e6debab9d5be4d8ba954a1bebef70464
	  System UUID:                e6debab9-d5be-4d8b-a954-a1bebef70464
	  Boot ID:                    7e827661-e594-49cb-aeb7-87caaf3b46a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wt7kx                100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     75s
	  kube-system                 etcd-pause-302295                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (5%!)(MISSING)       0 (0%!)(MISSING)         88s
	  kube-system                 kube-apiserver-pause-302295             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-controller-manager-pause-302295    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         88s
	  kube-system                 kube-proxy-65jsz                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         75s
	  kube-system                 kube-scheduler-pause-302295             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (8%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  88s                kubelet          Node pause-302295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s                kubelet          Node pause-302295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s                kubelet          Node pause-302295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeReady                87s                kubelet          Node pause-302295 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node pause-302295 event: Registered Node pause-302295 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-302295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-302295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-302295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-302295 event: Registered Node pause-302295 in Controller
	
	
	==> dmesg <==
	[  +0.064878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057114] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.193306] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.144178] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.284496] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +4.414764] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.060555] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.054387] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[Aug 7 19:25] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.880157] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +4.278352] kauditd_printk_skb: 58 callbacks suppressed
	[  +9.232937] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[ +30.060576] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.352530] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +0.180588] systemd-fstab-generator[2386]: Ignoring "noauto" option for root device
	[  +0.369911] systemd-fstab-generator[2548]: Ignoring "noauto" option for root device
	[  +0.283736] systemd-fstab-generator[2617]: Ignoring "noauto" option for root device
	[  +0.428498] systemd-fstab-generator[2697]: Ignoring "noauto" option for root device
	[Aug 7 19:26] systemd-fstab-generator[2992]: Ignoring "noauto" option for root device
	[  +0.101970] kauditd_printk_skb: 173 callbacks suppressed
	[  +5.229488] kauditd_printk_skb: 92 callbacks suppressed
	[  +1.817771] systemd-fstab-generator[3798]: Ignoring "noauto" option for root device
	[  +4.632094] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.879943] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.298410] systemd-fstab-generator[4237]: Ignoring "noauto" option for root device
	
	
	==> etcd [abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876] <==
	{"level":"info","ts":"2024-08-07T19:26:04.838865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e60cc1b116e52d7a","initial-advertise-peer-urls":["https://192.168.61.241:2380"],"listen-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:26:06.14117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgPreVoteResp from e60cc1b116e52d7a at term 2"}
	{"level":"info","ts":"2024-08-07T19:26:06.141283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgVoteResp from e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became leader at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.141304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e60cc1b116e52d7a elected leader e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:06.144366Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e60cc1b116e52d7a","local-member-attributes":"{Name:pause-302295 ClientURLs:[https://192.168.61.241:2379]}","request-path":"/0/members/e60cc1b116e52d7a/attributes","cluster-id":"caca2a402ef45298","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:26:06.144427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:06.147428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:06.156244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:06.156283Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:06.162789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.241:2379"}
	{"level":"info","ts":"2024-08-07T19:26:06.16606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:26:09.335049Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-07T19:26:09.335173Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-302295","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"]}
	{"level":"warn","ts":"2024-08-07T19:26:09.335301Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.335328Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.337515Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.241:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-07T19:26:09.337545Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.241:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-07T19:26:09.337611Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e60cc1b116e52d7a","current-leader-member-id":"e60cc1b116e52d7a"}
	{"level":"info","ts":"2024-08-07T19:26:09.340761Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:09.340975Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:09.340996Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-302295","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"]}
	
	
	==> etcd [d50105e5c7b9ff2bb40cf60444152e581921c86c3ebe79d06f602b03e84403c1] <==
	{"level":"info","ts":"2024-08-07T19:26:11.791156Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:26:11.791201Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-07T19:26:11.791439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a switched to configuration voters=(16576837294781443450)"}
	{"level":"info","ts":"2024-08-07T19:26:11.791514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"caca2a402ef45298","local-member-id":"e60cc1b116e52d7a","added-peer-id":"e60cc1b116e52d7a","added-peer-peer-urls":["https://192.168.61.241:2380"]}
	{"level":"info","ts":"2024-08-07T19:26:11.791627Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"caca2a402ef45298","local-member-id":"e60cc1b116e52d7a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:26:11.792175Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-07T19:26:11.820612Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-07T19:26:11.820863Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:11.822161Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.241:2380"}
	{"level":"info","ts":"2024-08-07T19:26:11.821003Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e60cc1b116e52d7a","initial-advertise-peer-urls":["https://192.168.61.241:2380"],"listen-peer-urls":["https://192.168.61.241:2380"],"advertise-client-urls":["https://192.168.61.241:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.241:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-07T19:26:11.821027Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-07T19:26:13.259546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgPreVoteResp from e60cc1b116e52d7a at term 3"}
	{"level":"info","ts":"2024-08-07T19:26:13.259671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became candidate at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.259676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a received MsgVoteResp from e60cc1b116e52d7a at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.259684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e60cc1b116e52d7a became leader at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.2597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e60cc1b116e52d7a elected leader e60cc1b116e52d7a at term 4"}
	{"level":"info","ts":"2024-08-07T19:26:13.265896Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e60cc1b116e52d7a","local-member-attributes":"{Name:pause-302295 ClientURLs:[https://192.168.61.241:2379]}","request-path":"/0/members/e60cc1b116e52d7a/attributes","cluster-id":"caca2a402ef45298","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-07T19:26:13.265965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:13.26591Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-07T19:26:13.266577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:13.266609Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-07T19:26:13.268838Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-07T19:26:13.281931Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.241:2379"}
	
	
	==> kernel <==
	 19:26:35 up 2 min,  0 users,  load average: 0.95, 0.34, 0.12
	Linux pause-302295 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955] <==
	I0807 19:26:08.200642       1 controller.go:167] Shutting down OpenAPI controller
	I0807 19:26:08.200651       1 available_controller.go:439] Shutting down AvailableConditionController
	I0807 19:26:08.200670       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0807 19:26:08.200696       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0807 19:26:08.200706       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0807 19:26:08.200719       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0807 19:26:08.200730       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0807 19:26:08.200997       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0807 19:26:08.201443       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:08.201560       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 19:26:08.201597       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:08.201690       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0807 19:26:08.201717       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0807 19:26:08.202358       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0807 19:26:08.202571       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0807 19:26:08.204386       1 controller.go:157] Shutting down quota evaluator
	I0807 19:26:08.204436       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204757       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204797       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204824       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.204858       1 controller.go:176] quota evaluator worker shutdown
	I0807 19:26:08.205008       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0807 19:26:08.207204       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0807 19:26:08.778033       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0807 19:26:08.778511       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-apiserver [d57a991ab06e391008da74529c56b1552b96ae4828de7d308c06d3352d187fed] <==
	I0807 19:26:14.530291       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:14.579318       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0807 19:26:14.580488       1 aggregator.go:165] initial CRD sync complete...
	I0807 19:26:14.580548       1 autoregister_controller.go:141] Starting autoregister controller
	I0807 19:26:14.580573       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0807 19:26:14.639342       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0807 19:26:14.639382       1 policy_source.go:224] refreshing policies
	I0807 19:26:14.639569       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0807 19:26:14.673005       1 shared_informer.go:320] Caches are synced for configmaps
	I0807 19:26:14.673152       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0807 19:26:14.673160       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0807 19:26:14.677310       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0807 19:26:14.683206       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0807 19:26:14.683632       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0807 19:26:14.683722       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0807 19:26:14.684174       1 cache.go:39] Caches are synced for autoregister controller
	I0807 19:26:14.684527       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0807 19:26:15.480845       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0807 19:26:16.194507       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0807 19:26:16.218211       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0807 19:26:16.265880       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0807 19:26:16.308244       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0807 19:26:16.315370       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0807 19:26:27.161280       1 controller.go:615] quota admission added evaluator for: endpoints
	I0807 19:26:27.167931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [5745e3db6bbeff92a7ece8a4b1087efa30747cd663cc05f5f61e63d0479df69f] <==
	I0807 19:26:27.050201       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0807 19:26:27.059328       1 shared_informer.go:320] Caches are synced for deployment
	I0807 19:26:27.063975       1 shared_informer.go:320] Caches are synced for persistent volume
	I0807 19:26:27.065161       1 shared_informer.go:320] Caches are synced for HPA
	I0807 19:26:27.066376       1 shared_informer.go:320] Caches are synced for expand
	I0807 19:26:27.069479       1 shared_informer.go:320] Caches are synced for stateful set
	I0807 19:26:27.070769       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0807 19:26:27.072770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.927868ms"
	I0807 19:26:27.073673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.349µs"
	I0807 19:26:27.073742       1 shared_informer.go:320] Caches are synced for PV protection
	I0807 19:26:27.094597       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0807 19:26:27.097188       1 shared_informer.go:320] Caches are synced for GC
	I0807 19:26:27.099722       1 shared_informer.go:320] Caches are synced for cronjob
	I0807 19:26:27.114019       1 shared_informer.go:320] Caches are synced for PVC protection
	I0807 19:26:27.151173       1 shared_informer.go:320] Caches are synced for endpoint
	I0807 19:26:27.156308       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0807 19:26:27.186395       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:26:27.219923       1 shared_informer.go:320] Caches are synced for resource quota
	I0807 19:26:27.224795       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0807 19:26:27.230851       1 shared_informer.go:320] Caches are synced for attach detach
	I0807 19:26:27.261480       1 shared_informer.go:320] Caches are synced for namespace
	I0807 19:26:27.300660       1 shared_informer.go:320] Caches are synced for service account
	I0807 19:26:27.720032       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:26:27.736411       1 shared_informer.go:320] Caches are synced for garbage collector
	I0807 19:26:27.736676       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91] <==
	I0807 19:26:05.849670       1 serving.go:380] Generated self-signed cert in-memory
	I0807 19:26:06.573837       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0807 19:26:06.576172       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:06.577729       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0807 19:26:06.578290       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:26:06.578383       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0807 19:26:06.578507       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3] <==
	I0807 19:26:06.062298       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:26:07.908651       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.241"]
	I0807 19:26:07.945082       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:26:07.945194       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:26:07.945211       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:26:07.947771       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:26:07.947995       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:26:07.948022       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:07.949216       1 config.go:192] "Starting service config controller"
	I0807 19:26:07.949252       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:26:07.949302       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:26:07.949322       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:26:07.949853       1 config.go:319] "Starting node config controller"
	I0807 19:26:07.949882       1 shared_informer.go:313] Waiting for caches to sync for node config
	
	
	==> kube-proxy [e368f51ab11885adae2425f5439b3ce774785ae5e7a1d9f8505e1639210bf6a8] <==
	I0807 19:26:15.427193       1 server_linux.go:69] "Using iptables proxy"
	I0807 19:26:15.452992       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.241"]
	I0807 19:26:15.505423       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0807 19:26:15.505499       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0807 19:26:15.505524       1 server_linux.go:165] "Using iptables Proxier"
	I0807 19:26:15.509646       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0807 19:26:15.510246       1 server.go:872] "Version info" version="v1.30.3"
	I0807 19:26:15.510297       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:15.511419       1 config.go:192] "Starting service config controller"
	I0807 19:26:15.511471       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0807 19:26:15.511506       1 config.go:101] "Starting endpoint slice config controller"
	I0807 19:26:15.511521       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0807 19:26:15.512066       1 config.go:319] "Starting node config controller"
	I0807 19:26:15.514887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0807 19:26:15.611894       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0807 19:26:15.612031       1 shared_informer.go:320] Caches are synced for service config
	I0807 19:26:15.616263       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d] <==
	I0807 19:26:05.914954       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:26:07.841445       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:26:07.841546       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:26:07.841571       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:26:07.841578       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:26:07.898232       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:26:07.898268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:07.900509       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0807 19:26:07.900588       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0807 19:26:07.900767       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d7a0ac0038370f1056a2735923f760e37155b375a1edb66618c54e8a74b4c188] <==
	I0807 19:26:12.472684       1 serving.go:380] Generated self-signed cert in-memory
	W0807 19:26:14.586474       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0807 19:26:14.588996       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0807 19:26:14.589162       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0807 19:26:14.589191       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0807 19:26:14.620955       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0807 19:26:14.621070       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0807 19:26:14.622722       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0807 19:26:14.624208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0807 19:26:14.625296       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0807 19:26:14.624222       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0807 19:26:14.725580       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.123145    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c95d2d175eb4c58d8aa8e679da35def3-usr-share-ca-certificates\") pod \"kube-apiserver-pause-302295\" (UID: \"c95d2d175eb4c58d8aa8e679da35def3\") " pod="kube-system/kube-apiserver-pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.124029    3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-302295?timeout=10s\": dial tcp 192.168.61.241:8443: connect: connection refused" interval="400ms"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.221159    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.222240    3805 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.241:8443: connect: connection refused" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.345388    3805 scope.go:117] "RemoveContainer" containerID="abaceb4ef5b1a707e71e910691bae5e76c10af048ebe2598907d7b120a298876"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.347911    3805 scope.go:117] "RemoveContainer" containerID="b90d88fbf6c9138ff7e6018a8236b35cf00216ea6718a268bc2a2f856dcf4955"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.349199    3805 scope.go:117] "RemoveContainer" containerID="8b4194ce733615c9f28843de166d52f3b212faf9cba981d69168a7b645e35d91"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.350290    3805 scope.go:117] "RemoveContainer" containerID="bbd28aba92481d62b2bf4e55001fba2de20dc63f7560264e67526168ce72ce1d"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.525606    3805 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-302295?timeout=10s\": dial tcp 192.168.61.241:8443: connect: connection refused" interval="800ms"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: I0807 19:26:11.626129    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:11 pause-302295 kubelet[3805]: E0807 19:26:11.626980    3805 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.241:8443: connect: connection refused" node="pause-302295"
	Aug 07 19:26:12 pause-302295 kubelet[3805]: I0807 19:26:12.428498    3805 kubelet_node_status.go:73] "Attempting to register node" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.761792    3805 kubelet_node_status.go:112] "Node was previously registered" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.762286    3805 kubelet_node_status.go:76] "Successfully registered node" node="pause-302295"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.763912    3805 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.764972    3805 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.906332    3805 apiserver.go:52] "Watching apiserver"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.908849    3805 topology_manager.go:215] "Topology Admit Handler" podUID="42922f3b-0eea-408a-90a7-679748a29fb0" podNamespace="kube-system" podName="kube-proxy-65jsz"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.908969    3805 topology_manager.go:215] "Topology Admit Handler" podUID="edd1b471-d406-425e-88b2-3a60d3a2dd2e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wt7kx"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.919012    3805 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.991456    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42922f3b-0eea-408a-90a7-679748a29fb0-xtables-lock\") pod \"kube-proxy-65jsz\" (UID: \"42922f3b-0eea-408a-90a7-679748a29fb0\") " pod="kube-system/kube-proxy-65jsz"
	Aug 07 19:26:14 pause-302295 kubelet[3805]: I0807 19:26:14.991560    3805 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42922f3b-0eea-408a-90a7-679748a29fb0-lib-modules\") pod \"kube-proxy-65jsz\" (UID: \"42922f3b-0eea-408a-90a7-679748a29fb0\") " pod="kube-system/kube-proxy-65jsz"
	Aug 07 19:26:15 pause-302295 kubelet[3805]: I0807 19:26:15.209863    3805 scope.go:117] "RemoveContainer" containerID="2bf0e87247a8595dd86e281673f9f21e42e2262a42d04abacde4f8a9ae025f79"
	Aug 07 19:26:15 pause-302295 kubelet[3805]: I0807 19:26:15.210655    3805 scope.go:117] "RemoveContainer" containerID="96fb66de2a1e9f1310c5b4cbe08725f3df7d442e65dcc76f3986ddabf36c1ed3"
	Aug 07 19:26:24 pause-302295 kubelet[3805]: I0807 19:26:24.086025    3805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0807 19:26:34.315955   76798 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19389-20864/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
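
The `bufio.Scanner: token too long` error in the stderr block above means a single line in lastStart.txt exceeded bufio.Scanner's default 64 KiB token limit (bufio.MaxScanTokenSize), so the post-mortem reader gave up before it could print the last start logs. A minimal standalone sketch of reading such a file with an enlarged scanner buffer (hypothetical example, not minikube's actual logs.go):

// Hypothetical standalone sketch: reading a log file whose lines can exceed
// bufio.Scanner's default 64 KiB token limit.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // assumed path, for illustration only
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Without this, any line longer than bufio.MaxScanTokenSize (64 KiB) makes
	// Scan() stop with err == bufio.ErrTooLong ("token too long").
	sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024) // allow lines up to 10 MiB
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
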
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-302295 -n pause-302295
helpers_test.go:261: (dbg) Run:  kubectl --context pause-302295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (71.73s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7200.051s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-359039 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0807 19:35:52.671170   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/custom-flannel-853483/client.crt: no such file or directory
E0807 19:35:53.359588   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/enable-default-cni-853483/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (11m52s)
	TestNetworkPlugins/group (3m55s)
	TestStartStop (11m33s)
	TestStartStop/group/default-k8s-diff-port (3m58s)
	TestStartStop/group/default-k8s-diff-port/serial (3m58s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (10s)
	TestStartStop/group/embed-certs (1m46s)
	TestStartStop/group/embed-certs/serial (1m46s)
	TestStartStop/group/embed-certs/serial/Stop (38s)
	TestStartStop/group/no-preload (5m14s)
	TestStartStop/group/no-preload/serial (5m14s)
	TestStartStop/group/no-preload/serial/Stop (1m46s)
	TestStartStop/group/old-k8s-version (5m15s)
	TestStartStop/group/old-k8s-version/serial (5m15s)
	TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (29s)
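
The panic above is Go's test-binary watchdog: testing.(*M).startAlarm arms a timer for the duration given by the -timeout flag (here 2h0m0s, presumably set when the integration suite was invoked), and when it fires the binary panics, prints the tests still running, and dumps every goroutine, which is what the stack listing below is. A minimal hypothetical reproduction, unrelated to the minikube suite:

// slow_test.go: hypothetical standalone test, not part of the minikube suite.
// Running `go test -timeout 5s` against it produces the same
// "panic: test timed out after 5s" message followed by a goroutine dump.
package slow

import (
	"testing"
	"time"
)

func TestOutlivesTimeout(t *testing.T) {
	time.Sleep(10 * time.Second) // still running when the -timeout alarm fires
}
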

                                                
                                                
goroutine 3427 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006ceb60, 0xc00097bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000012948, {0x49d6100, 0x2b, 0x2b}, {0x26b6aa6?, 0xc00069bb00?, 0x4a92c80?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00049f180)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00049f180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000658f00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 334 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 333
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2333 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0006bcb90, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00098cf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006bcbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005562f0, {0x369ab00, 0xc000790b10}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005562f0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3369 [syscall, 1 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x168ed, 0xc001272a90, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0024d3f80)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0024d3f80)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001bad500)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001bad500)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014a16c0, 0xc001bad500)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStop({0x36bea70?, 0xc000414070?}, 0xc0014a16c0, {0xc0015b1c80?, 0x5518ce?}, {0x0?, 0xc0016f4f60?}, {0x551133?, 0x4a170f?}, {0xc0015c0000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:228 +0x17b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014a16c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014a16c0, 0xc001e11900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3340
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2764 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0017d73c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2759
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3244 [chan receive, 1 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0024b65c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3288
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 25 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 24
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 3218 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001918540, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3213
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1864 [chan receive, 3 minutes]:
testing.(*testContext).waitParallel(0xc0006ed950)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc000036b60, 0xc00125e240)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1664
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 333 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc000116750, 0xc001275f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x60?, 0xc000116750, 0xc000116798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc0006cf040?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0001fc180?, 0xc0009fc360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 329
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 696 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0xc001aa8300, 0xc001bae5a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 345
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 329 [chan receive, 70 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00092b680, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 354
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 328 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00123cf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 354
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2747 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2746
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2769 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2768
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2343 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006bcbc0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2341
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 691 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0xc001aa9080, 0xc001a8af00)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 690
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2767 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0017d7390, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001867620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0017d73c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0016fd800, {0x369ab00, 0xc001c4d470}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0016fd800, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2764
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3403 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26dc80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098d380?, 0xc0018e4a55?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098d380, {0xc0018e4a55, 0x5ab, 0x5ab})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00125ca38, {0xc0018e4a55?, 0xc001417530?, 0x213?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001376870, {0x36995a0, 0xc0007ca120})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc001376870}, {0x36995a0, 0xc0007ca120}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00125ca38?, {0x36996e0, 0xc001376870})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00125ca38, {0x36996e0, 0xc001376870})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc001376870}, {0x3699600, 0xc00125ca38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000060540?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3402
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 229 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7f18be26e820, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xd?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000503080)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000503080)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0001365e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0001365e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007be0f0, {0x36b1ac0, 0xc0001365e0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0007be0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc001514340?, 0xc001514340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 226
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 1664 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0014a0000, {0x265c089?, 0x55127c?}, 0xc00125e240)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0014a0000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0014a0000, 0x313f358)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2768 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc001413f50, 0xc001413f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x0?, 0xc001413f50, 0xc001413f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0x99b656?, 0xc001a88180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001413fd0?, 0x592e44?, 0xc000061b00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2764
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2573 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2572
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 332 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00092b650, 0x20)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00123cde0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00092b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008abd30, {0x369ab00, 0xc0006f4960}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008abd30, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 329
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3278 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bac780, 0xc001bae8a0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3275
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3217 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001bfe4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3213
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2284 [chan receive, 1 minutes]:
testing.(*T).Run(0xc001bfd040, {0x265d634?, 0x0?}, 0xc001628400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfd040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfd040, 0xc001be8cc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2278
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3001 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc000111750, 0xc000111798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x0?, 0xc000111750, 0xc000111798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc0000369c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000209980?, 0xc000129e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2940
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3002 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3001
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2281 [chan receive, 3 minutes]:
testing.(*T).Run(0xc001bfcb60, {0x265d634?, 0x0?}, 0xc001628100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfcb60, 0xc001be8bc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2278
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3276 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f18be26e630, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001e08b40?, 0xc001409c29?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e08b40, {0xc001409c29, 0x3d7, 0x3d7})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ca178, {0xc001409c29?, 0x7f18bc8345c8?, 0x29?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012d2a50, {0x36995a0, 0xc001500600})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0012d2a50}, {0x36995a0, 0xc001500600}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007ca178?, {0x36996e0, 0xc0012d2a50})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007ca178, {0x36996e0, 0xc0012d2a50})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0012d2a50}, {0x3699600, 0xc0007ca178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016d12c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3275
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3000 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0016a6450, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0011cdc80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016a6480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000557560, {0x369ab00, 0xc002048e40}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000557560, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2940
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2278 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001bfc680, 0x313f578)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1768
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3405 [select]:
os/exec.(*Cmd).watchCtx(0xc000003380, 0xc001a8a780)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3402
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2745 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000217650, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001716960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0002176c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0011d52f0, {0x369ab00, 0xc001ac50b0}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0011d52f0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2715
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3243 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002620960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3288
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3404 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26d998, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098d440?, 0xc0013fd9c3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098d440, {0xc0013fd9c3, 0x63d, 0x63d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00125ca50, {0xc0013fd9c3?, 0x7f18bc8345c8?, 0x2000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013768a0, {0x36995a0, 0xc0015001f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0013768a0}, {0x36995a0, 0xc0015001f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00125ca50?, {0x36996e0, 0xc0013768a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00125ca50, {0x36996e0, 0xc0013768a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0013768a0}, {0x3699600, 0xc00125ca50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001592180?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3402
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2897 [chan receive]:
testing.(*T).Run(0xc001bfc000, {0x2687e40?, 0x60400000004?}, 0xc001592000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bfc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bfc000, 0xc001628000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2279
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3275 [syscall, 1 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x1661b, 0xc001274a90, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0024d2a20)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0024d2a20)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001bac780)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001bac780)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014a09c0, 0xc001bac780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStop({0x36bea70?, 0xc0004941c0?}, 0xc0014a09c0, {0xc000179b18?, 0x5518ce?}, {0x0?, 0xc00154c760?}, {0x551133?, 0x4a170f?}, {0xc0001cd200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:228 +0x17b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014a09c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014a09c0, 0xc001e10300)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2966
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3294 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3293
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3396 [select]:
os/exec.(*Cmd).watchCtx(0xc0001fca80, 0xc001a8a120)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3345
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2279 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001bfc820, {0x265d634?, 0x0?}, 0xc001628000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfc820, 0xc001be8b40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2278
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2978 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0015cf680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3161 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001918510, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001bfe3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001918540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0017bc190, {0x369ab00, 0xc001832630}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017bc190, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3218
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2714 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001716a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2653
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2966 [chan receive, 1 minutes]:
testing.(*T).Run(0xc001bfc4e0, {0x265b234?, 0x60400000004?}, 0xc001e10300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bfc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bfc4e0, 0xc001628200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2282
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2715 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0002176c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2653
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2572 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc0012eb750, 0xc0012eb798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x20?, 0xc0012eb750, 0xc0012eb798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0x99b656?, 0xc00151d200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012eb7d0?, 0x592e44?, 0xc001a8a720?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3293 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc0012f3f50, 0xc0012f3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x7?, 0xc0012f3f50, 0xc0012f3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc001514ea0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012f3fd0?, 0x592e44?, 0xc00165f410?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2979 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0024b6580, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2977
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 780 [chan send, 69 minutes]:
os/exec.(*Cmd).watchCtx(0xc001efcf00, 0xc000061ec0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 779
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2342 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00098d020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2341
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2571 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00092b250, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001bfe840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00092b300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c00260, {0x369ab00, 0xc0014e61e0}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c00260, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2335 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2334
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 830 [select, 69 minutes]:
net/http.(*persistConn).readLoop(0xc0017b7d40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 828
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3176 [chan receive]:
testing.(*T).Run(0xc001bfd1e0, {0x2669486?, 0x60400000004?}, 0xc001592180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001bfd1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001bfd1e0, 0xc001628100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2281
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3292 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0024b6510, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002620840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0024b65c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000556890, {0x369ab00, 0xc0016e2270}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000556890, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3244
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3394 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26e348, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098c120?, 0xc0013ff50b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098c120, {0xc0013ff50b, 0x2f5, 0x2f5})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00125c920, {0xc0013ff50b?, 0xc000494004?, 0xd0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013762a0, {0x36995a0, 0xc0007ca0a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0013762a0}, {0x36995a0, 0xc0007ca0a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00125c920?, {0x36996e0, 0xc0013762a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00125c920, {0x36996e0, 0xc0013762a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0013762a0}, {0x3699600, 0xc00125c920}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001592000?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3345
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2746 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc0012f6f50, 0xc0012f6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x20?, 0xc0012f6f50, 0xc0012f6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012f6fd0?, 0x592e44?, 0xc001a8aa20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2715
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3162 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc001411f50, 0xc001411f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x80?, 0xc001411f50, 0xc001411f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc001bfd520?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0001fca80?, 0xc00127a480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3218
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2455 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001bfe960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2454
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 831 [select, 69 minutes]:
net/http.(*persistConn).writeLoop(0xc0017b7d40)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 828
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3395 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26e158, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098c240?, 0xc0012a8400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098c240, {0xc0012a8400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00125c938, {0xc0012a8400?, 0xc000113530?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013762d0, {0x36995a0, 0xc001500130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0013762d0}, {0x36995a0, 0xc001500130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00125c938?, {0x36996e0, 0xc0013762d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00125c938, {0x36996e0, 0xc0013762d0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0013762d0}, {0x3699600, 0xc00125c938}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016d02a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3345
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2334 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc0016f5750, 0xc0016f5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x80?, 0xc0016f5750, 0xc0016f5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc001bfc000?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0016f57d0?, 0x592e44?, 0xc002048210?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2343
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2403 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc0012e7750, 0xc0012e7798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0xa0?, 0xc0012e7750, 0xc0012e7798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0xc0014a11e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012e77d0?, 0x592e44?, 0xc00127a5a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2404 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2403
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1768 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0014a0820, {0x265c089?, 0x551133?}, 0x313f578)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0014a0820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0014a0820, 0x313f3a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2908 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0024b6550, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0015cf560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0024b6580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005561d0, {0x369ab00, 0xc002048030}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005561d0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3370 [IO wait, 1 minutes]:
internal/poll.runtime_pollWait(0x7f18be26e060, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098c8a0?, 0xc00154682a?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098c8a0, {0xc00154682a, 0x3d6, 0x3d6})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ca3c8, {0xc00154682a?, 0x7ffffdc98278?, 0x2a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0013779e0, {0x36995a0, 0xc0015009b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0013779e0}, {0x36995a0, 0xc0015009b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007ca3c8?, {0x36996e0, 0xc0013779e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007ca3c8, {0x36996e0, 0xc0013779e0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0013779e0}, {0x3699600, 0xc0007ca3c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001e11900?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3369
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3345 [syscall]:
syscall.Syscall6(0xf7, 0x1, 0x16986, 0xc00086b9b8, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001a53d10)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001a53d10)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0001fca80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0001fca80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0015141a0, 0xc0001fca80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x36bea70, 0xc0004940e0}, 0xc0015141a0, {0xc000179830, 0x16}, {0x2673418, 0xf}, {0x551133?, 0x4a170f?}, {0xc0012fc180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:205 +0x1d5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0015141a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0015141a0, 0xc001592000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2897
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2282 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001bfcd00, {0x265d634?, 0x0?}, 0xc001628200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001bfcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001bfcd00, 0xc001be8c00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2278
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3340 [chan receive, 1 minutes]:
testing.(*T).Run(0xc001841040, {0x265b234?, 0x60400000004?}, 0xc001e11900)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001841040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001841040, 0xc001628400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2284
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2402 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc00081ef50, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001aaa840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00081f0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008abb70, {0x369ab00, 0xc0020489f0}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008abb70, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2351 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001aaa960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2346
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2352 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00081f0c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2346
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2940 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016a6480, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2996
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2456 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00092b300, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2454
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2909 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bec30, 0xc0000602a0}, 0xc000115f50, 0xc000115f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bec30, 0xc0000602a0}, 0x60?, 0xc000115f50, 0xc000115f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bec30?, 0xc0000602a0?}, 0x99b656?, 0xc001e35680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000115fd0?, 0x592e44?, 0xc001baf860?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2910 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2909
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3163 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3162
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2939 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0011cdda0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2996
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2763 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001867740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2759
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3277 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26db88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001e08c00?, 0xc001545dcf?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e08c00, {0xc001545dcf, 0x231, 0x231})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ca198, {0xc001545dcf?, 0x7ffffdc98278?, 0x1e6b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0012d2a80, {0x36995a0, 0xc001662590})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc0012d2a80}, {0x36995a0, 0xc001662590}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007ca198?, {0x36996e0, 0xc0012d2a80})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007ca198, {0x36996e0, 0xc0012d2a80})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc0012d2a80}, {0x3699600, 0xc0007ca198}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001e10300?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3275
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3402 [syscall]:
syscall.Syscall6(0xf7, 0x1, 0x16a24, 0xc0000a8ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001be04b0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001be04b0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000003380)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000003380)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001514820, 0xc000003380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bea70, 0xc00046a070}, 0xc001514820, {0xc00149e020, 0x1c}, {0x0?, 0xc00154c760?}, {0x551133?, 0x4a170f?}, {0xc000904600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001514820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001514820, 0xc001592180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3176
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3372 [select, 1 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bad500, 0xc001baf920)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3369
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3371 [IO wait]:
internal/poll.runtime_pollWait(0x7f18be26e538, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00098c960?, 0xc0014020c3?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00098c960, {0xc0014020c3, 0x1f3d, 0x1f3d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007ca3e0, {0xc0014020c3?, 0xc000414070?, 0x1e29?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001377a10, {0x36995a0, 0xc00125cb28})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996e0, 0xc001377a10}, {0x36995a0, 0xc00125cb28}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007ca3e0?, {0x36996e0, 0xc001377a10})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0007ca3e0, {0x36996e0, 0xc001377a10})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996e0, 0xc001377a10}, {0x3699600, 0xc0007ca3e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001593780?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3369
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                    

Test pass (168/215)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 30.62
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 14.45
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 16.2
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.99
31 TestOffline 98.8
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 74.5
38 TestCertExpiration 279.71
40 TestForceSystemdFlag 76.49
41 TestForceSystemdEnv 46.02
43 TestKVMDriverInstallOrUpdate 7.48
47 TestErrorSpam/setup 44.55
48 TestErrorSpam/start 0.33
49 TestErrorSpam/status 0.72
50 TestErrorSpam/pause 1.55
51 TestErrorSpam/unpause 1.56
52 TestErrorSpam/stop 4.64
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 53.6
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 44.8
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.28
64 TestFunctional/serial/CacheCmd/cache/add_local 2.21
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.59
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 403.31
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.18
75 TestFunctional/serial/LogsFileCmd 1.18
76 TestFunctional/serial/InvalidService 4.41
78 TestFunctional/parallel/ConfigCmd 0.3
79 TestFunctional/parallel/DashboardCmd 14.7
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.13
82 TestFunctional/parallel/StatusCmd 0.8
86 TestFunctional/parallel/ServiceCmdConnect 13.61
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 49.39
90 TestFunctional/parallel/SSHCmd 0.41
91 TestFunctional/parallel/CpCmd 1.25
92 TestFunctional/parallel/MySQL 26.2
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.44
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
102 TestFunctional/parallel/License 0.65
103 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
104 TestFunctional/parallel/Version/short 0.04
105 TestFunctional/parallel/Version/components 0.54
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
110 TestFunctional/parallel/ImageCommands/ImageBuild 4
111 TestFunctional/parallel/ImageCommands/Setup 2
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.66
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.02
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.82
119 TestFunctional/parallel/ImageCommands/ImageRemove 1.31
121 TestFunctional/parallel/ServiceCmd/List 0.34
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.9
124 TestFunctional/parallel/ServiceCmd/Format 0.36
125 TestFunctional/parallel/ServiceCmd/URL 0.37
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.13
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
137 TestFunctional/parallel/ProfileCmd/profile_list 0.26
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
139 TestFunctional/parallel/MountCmd/any-port 12.8
140 TestFunctional/parallel/MountCmd/specific-port 1.97
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
142 TestFunctional/delete_echo-server_images 0.04
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/StartCluster 273.92
149 TestMultiControlPlane/serial/DeployApp 6.5
150 TestMultiControlPlane/serial/PingHostFromPods 1.25
151 TestMultiControlPlane/serial/AddWorkerNode 84.79
152 TestMultiControlPlane/serial/NodeLabels 0.06
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
154 TestMultiControlPlane/serial/CopyFile 12.71
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
160 TestMultiControlPlane/serial/DeleteSecondaryNode 17.13
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
163 TestMultiControlPlane/serial/RestartCluster 351.78
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
165 TestMultiControlPlane/serial/AddSecondaryNode 82.28
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
170 TestJSONOutput/start/Command 58.6
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.72
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.64
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 7.32
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.19
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 89.16
202 TestMountStart/serial/StartWithMountFirst 27.34
203 TestMountStart/serial/VerifyMountFirst 0.36
204 TestMountStart/serial/StartWithMountSecond 29.59
205 TestMountStart/serial/VerifyMountSecond 0.36
206 TestMountStart/serial/DeleteFirst 0.67
207 TestMountStart/serial/VerifyMountPostDelete 0.36
208 TestMountStart/serial/Stop 1.27
209 TestMountStart/serial/RestartStopped 23.51
210 TestMountStart/serial/VerifyMountPostStop 0.37
213 TestMultiNode/serial/FreshStart2Nodes 125.59
214 TestMultiNode/serial/DeployApp2Nodes 5.53
215 TestMultiNode/serial/PingHostFrom2Pods 0.82
216 TestMultiNode/serial/AddNode 54.93
217 TestMultiNode/serial/MultiNodeLabels 0.06
218 TestMultiNode/serial/ProfileList 0.21
219 TestMultiNode/serial/CopyFile 6.96
220 TestMultiNode/serial/StopNode 2.31
221 TestMultiNode/serial/StartAfterStop 39.61
223 TestMultiNode/serial/DeleteNode 2.35
225 TestMultiNode/serial/RestartMultiNode 181.89
226 TestMultiNode/serial/ValidateNameConflict 44.49
233 TestScheduledStopUnix 111.92
237 TestRunningBinaryUpgrade 220.9
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
243 TestNoKubernetes/serial/StartWithK8s 97.34
244 TestNoKubernetes/serial/StartWithStopK8s 27.7
245 TestStoppedBinaryUpgrade/Setup 2.62
246 TestStoppedBinaryUpgrade/Upgrade 98.58
247 TestNoKubernetes/serial/Start 30.24
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
249 TestNoKubernetes/serial/ProfileList 1.72
250 TestNoKubernetes/serial/Stop 1.29
251 TestNoKubernetes/serial/StartNoArgs 39.24
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
266 TestPause/serial/Start 63.45
TestDownloadOnly/v1.20.0/json-events (30.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-923814 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-923814 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (30.624159565s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (30.62s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-923814
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-923814: exit status 85 (55.333541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-923814 | jenkins | v1.33.1 | 07 Aug 24 17:35 UTC |          |
	|         | -p download-only-923814        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:35:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:35:55.827030   28064 out.go:291] Setting OutFile to fd 1 ...
	I0807 17:35:55.827277   28064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:35:55.827286   28064 out.go:304] Setting ErrFile to fd 2...
	I0807 17:35:55.827291   28064 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:35:55.827459   28064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	W0807 17:35:55.827596   28064 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19389-20864/.minikube/config/config.json: open /home/jenkins/minikube-integration/19389-20864/.minikube/config/config.json: no such file or directory
	I0807 17:35:55.828131   28064 out.go:298] Setting JSON to true
	I0807 17:35:55.829004   28064 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4702,"bootTime":1723047454,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 17:35:55.829082   28064 start.go:139] virtualization: kvm guest
	I0807 17:35:55.831554   28064 out.go:97] [download-only-923814] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0807 17:35:55.831661   28064 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball: no such file or directory
	I0807 17:35:55.831708   28064 notify.go:220] Checking for updates...
	I0807 17:35:55.833456   28064 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:35:55.835099   28064 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:35:55.836911   28064 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 17:35:55.838395   28064 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:35:55.839777   28064 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0807 17:35:55.842526   28064 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:35:55.842749   28064 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:35:55.941650   28064 out.go:97] Using the kvm2 driver based on user configuration
	I0807 17:35:55.941707   28064 start.go:297] selected driver: kvm2
	I0807 17:35:55.941713   28064 start.go:901] validating driver "kvm2" against <nil>
	I0807 17:35:55.942101   28064 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:35:55.942226   28064 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 17:35:55.956478   28064 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 17:35:55.956525   28064 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:35:55.956981   28064 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0807 17:35:55.957146   28064 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:35:55.957218   28064 cni.go:84] Creating CNI manager for ""
	I0807 17:35:55.957235   28064 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:35:55.957248   28064 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:35:55.957316   28064 start.go:340] cluster config:
	{Name:download-only-923814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-923814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:35:55.957519   28064 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:35:55.959540   28064 out.go:97] Downloading VM boot image ...
	I0807 17:35:55.959579   28064 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0807 17:36:08.787429   28064 out.go:97] Starting "download-only-923814" primary control-plane node in "download-only-923814" cluster
	I0807 17:36:08.787457   28064 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0807 17:36:08.896943   28064 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:08.896972   28064 cache.go:56] Caching tarball of preloaded images
	I0807 17:36:08.897148   28064 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0807 17:36:08.899369   28064 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0807 17:36:08.899393   28064 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:09.016126   28064 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:24.547428   28064 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:24.547537   28064 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:25.448334   28064 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0807 17:36:25.448682   28064 profile.go:143] Saving config to /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/download-only-923814/config.json ...
	I0807 17:36:25.448717   28064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/download-only-923814/config.json: {Name:mk9bb9aaa037586d4c935881e9ae415c782b4f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0807 17:36:25.448935   28064 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0807 17:36:25.449114   28064 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-923814 host does not exist
	  To start a cluster, run: "minikube start -p download-only-923814"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-923814
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (14.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-458549 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-458549 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.451395144s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.45s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-458549
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-458549: exit status 85 (56.037126ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-923814 | jenkins | v1.33.1 | 07 Aug 24 17:35 UTC |                     |
	|         | -p download-only-923814        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| delete  | -p download-only-923814        | download-only-923814 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| start   | -o=json --download-only        | download-only-458549 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC |                     |
	|         | -p download-only-458549        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:36:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:36:26.759331   28350 out.go:291] Setting OutFile to fd 1 ...
	I0807 17:36:26.759584   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:26.759593   28350 out.go:304] Setting ErrFile to fd 2...
	I0807 17:36:26.759597   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:26.759754   28350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 17:36:26.760303   28350 out.go:298] Setting JSON to true
	I0807 17:36:26.761097   28350 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4733,"bootTime":1723047454,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 17:36:26.761153   28350 start.go:139] virtualization: kvm guest
	I0807 17:36:26.763201   28350 out.go:97] [download-only-458549] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 17:36:26.763332   28350 notify.go:220] Checking for updates...
	I0807 17:36:26.764730   28350 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:36:26.766231   28350 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:36:26.767459   28350 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 17:36:26.768819   28350 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:36:26.770325   28350 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0807 17:36:26.772631   28350 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:36:26.772871   28350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:36:26.804241   28350 out.go:97] Using the kvm2 driver based on user configuration
	I0807 17:36:26.804263   28350 start.go:297] selected driver: kvm2
	I0807 17:36:26.804269   28350 start.go:901] validating driver "kvm2" against <nil>
	I0807 17:36:26.804633   28350 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:26.804754   28350 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 17:36:26.819396   28350 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 17:36:26.819445   28350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:36:26.820062   28350 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0807 17:36:26.820282   28350 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:36:26.820314   28350 cni.go:84] Creating CNI manager for ""
	I0807 17:36:26.820327   28350 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:36:26.820344   28350 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:36:26.820412   28350 start.go:340] cluster config:
	{Name:download-only-458549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-458549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:36:26.820537   28350 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:26.822136   28350 out.go:97] Starting "download-only-458549" primary control-plane node in "download-only-458549" cluster
	I0807 17:36:26.822166   28350 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 17:36:26.993650   28350 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:26.993674   28350 cache.go:56] Caching tarball of preloaded images
	I0807 17:36:26.993812   28350 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0807 17:36:26.995735   28350 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0807 17:36:26.995754   28350 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:27.113798   28350 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-458549 host does not exist
	  To start a cluster, run: "minikube start -p download-only-458549"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-458549
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (16.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-968763 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-968763 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.197711866s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (16.20s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-968763
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-968763: exit status 85 (55.513025ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-923814 | jenkins | v1.33.1 | 07 Aug 24 17:35 UTC |                     |
	|         | -p download-only-923814           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| delete  | -p download-only-923814           | download-only-923814 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| start   | -o=json --download-only           | download-only-458549 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC |                     |
	|         | -p download-only-458549           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| delete  | -p download-only-458549           | download-only-458549 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC | 07 Aug 24 17:36 UTC |
	| start   | -o=json --download-only           | download-only-968763 | jenkins | v1.33.1 | 07 Aug 24 17:36 UTC |                     |
	|         | -p download-only-968763           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/07 17:36:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0807 17:36:41.523751   28568 out.go:291] Setting OutFile to fd 1 ...
	I0807 17:36:41.523984   28568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:41.523994   28568 out.go:304] Setting ErrFile to fd 2...
	I0807 17:36:41.523998   28568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 17:36:41.524279   28568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 17:36:41.524874   28568 out.go:298] Setting JSON to true
	I0807 17:36:41.525731   28568 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4748,"bootTime":1723047454,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 17:36:41.525785   28568 start.go:139] virtualization: kvm guest
	I0807 17:36:41.528119   28568 out.go:97] [download-only-968763] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 17:36:41.528275   28568 notify.go:220] Checking for updates...
	I0807 17:36:41.529614   28568 out.go:169] MINIKUBE_LOCATION=19389
	I0807 17:36:41.531147   28568 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 17:36:41.532461   28568 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 17:36:41.533748   28568 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 17:36:41.535166   28568 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0807 17:36:41.537850   28568 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0807 17:36:41.538070   28568 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 17:36:41.570610   28568 out.go:97] Using the kvm2 driver based on user configuration
	I0807 17:36:41.570644   28568 start.go:297] selected driver: kvm2
	I0807 17:36:41.570651   28568 start.go:901] validating driver "kvm2" against <nil>
	I0807 17:36:41.571003   28568 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:41.571081   28568 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19389-20864/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0807 17:36:41.586295   28568 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0807 17:36:41.586352   28568 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0807 17:36:41.586888   28568 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0807 17:36:41.587046   28568 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0807 17:36:41.587079   28568 cni.go:84] Creating CNI manager for ""
	I0807 17:36:41.587090   28568 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0807 17:36:41.587110   28568 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0807 17:36:41.587188   28568 start.go:340] cluster config:
	{Name:download-only-968763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-968763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 17:36:41.587294   28568 iso.go:125] acquiring lock: {Name:mkf212fcb23c5f8609a2c03b42fcca30ca8c42d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0807 17:36:41.589201   28568 out.go:97] Starting "download-only-968763" primary control-plane node in "download-only-968763" cluster
	I0807 17:36:41.589234   28568 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0807 17:36:42.196642   28568 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:42.196675   28568 cache.go:56] Caching tarball of preloaded images
	I0807 17:36:42.196836   28568 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0807 17:36:42.268825   28568 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0807 17:36:42.268903   28568 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:42.901863   28568 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0807 17:36:56.032609   28568 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0807 17:36:56.032720   28568 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19389-20864/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-968763 host does not exist
	  To start a cluster, run: "minikube start -p download-only-968763"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)
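For reference, the checksum-verified preload fetch logged above can be replayed by hand. The URL and md5 are the ones shown in the download.go line; curl and md5sum on the host are assumed and are not part of the test itself:

    curl -fLo preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
    # the expected digest is carried in the ?checksum=md5:... query parameter above
    md5sum preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
    # expect: 89b2d75682ccec9e5b50b57ad7b65741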

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-968763
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.99s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-897433 --alsologtostderr --binary-mirror http://127.0.0.1:37491 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-897433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-897433
--- PASS: TestBinaryMirror (0.99s)

                                                
                                    
TestOffline (98.8s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-132486 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-132486 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.967663353s)
helpers_test.go:175: Cleaning up "offline-crio-132486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-132486
--- PASS: TestOffline (98.80s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-533488
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-533488: exit status 85 (45.613972ms)

                                                
                                                
-- stdout --
	* Profile "addons-533488" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-533488"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-533488
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-533488: exit status 85 (48.244845ms)

                                                
                                                
-- stdout --
	* Profile "addons-533488" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-533488"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (74.5s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-405893 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-405893 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.055084224s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-405893 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-405893 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-405893 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-405893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-405893
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-405893: (1.002347776s)
--- PASS: TestCertOptions (74.50s)
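The SAN and apiserver-port assertions above can be spot-checked with the same commands the test drives; the profile name and certificate path are taken from this run, while the grep filters are added here only for readability:

    out/minikube-linux-amd64 -p cert-options-405893 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs
    kubectl --context cert-options-405893 config view | grep 8555
    # the cluster entry for cert-options-405893 should list its server on port 8555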

                                                
                                    
TestCertExpiration (279.71s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.958378986s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.580814793s)
helpers_test.go:175: Cleaning up "cert-expiration-260571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-260571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-260571: (1.170478792s)
--- PASS: TestCertExpiration (279.71s)
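A sketch of the expiry round-trip exercised above, using the same profile name and flags as this run; the openssl date check is added here as an illustration and is not part of the test:

    out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p cert-expiration-260571 ssh \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
    # after the 3m window has elapsed, restarting with a longer expiration re-issues the certs
    out/minikube-linux-amd64 start -p cert-expiration-260571 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio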

                                                
                                    
TestForceSystemdFlag (76.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-992969 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-992969 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.225112136s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-992969 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-992969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-992969
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-992969: (1.039362523s)
--- PASS: TestForceSystemdFlag (76.49s)
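What the cat of 02-crio.conf above is asserting can be checked directly. The cgroup_manager key name is CRI-O's configuration option, shown here as an assumption about the file's contents rather than quoted from this run:

    out/minikube-linux-amd64 start -p force-systemd-flag-992969 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p force-systemd-flag-992969 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expect: cgroup_manager = "systemd" when --force-systemd is in effect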

                                                
                                    
TestForceSystemdEnv (46.02s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-493959 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-493959 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.023867373s)
helpers_test.go:175: Cleaning up "force-systemd-env-493959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-493959
--- PASS: TestForceSystemdEnv (46.02s)

                                                
                                    
TestKVMDriverInstallOrUpdate (7.48s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.48s)

                                                
                                    
TestErrorSpam/setup (44.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-068172 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-068172 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-068172 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-068172 --driver=kvm2  --container-runtime=crio: (44.549974752s)
--- PASS: TestErrorSpam/setup (44.55s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (4.64s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop: (1.633859618s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop: (1.554193749s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-068172 --log_dir /tmp/nospam-068172 stop: (1.455781493s)
--- PASS: TestErrorSpam/stop (4.64s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19389-20864/.minikube/files/etc/test/nested/copy/28052/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-965692 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.598028279s)
--- PASS: TestFunctional/serial/StartWithProxy (53.60s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.8s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-965692 --alsologtostderr -v=8: (44.802799821s)
functional_test.go:659: soft start took 44.80362559s for "functional-965692" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.80s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-965692 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 cache add registry.k8s.io/pause:3.3: (1.283972473s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 cache add registry.k8s.io/pause:latest: (1.0188719s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-965692 /tmp/TestFunctionalserialCacheCmdcacheadd_local3574376974/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache add minikube-local-cache-test:functional-965692
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 cache add minikube-local-cache-test:functional-965692: (1.873217017s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache delete minikube-local-cache-test:functional-965692
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-965692
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.870914ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.59s)
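The reload cycle above, condensed into the underlying commands with the profile name used in this run; crictl inspecti is expected to fail between rmi and cache reload:

    out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image no longer present
    out/minikube-linux-amd64 -p functional-965692 cache reload
    out/minikube-linux-amd64 -p functional-965692 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again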

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 kubectl -- --context functional-965692 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-965692 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (403.31s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-965692 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m43.308963212s)
functional_test.go:757: restart took 6m43.309099521s for "functional-965692" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (403.31s)
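The --extra-config flag shown above maps onto the kube-apiserver command line. One way to confirm the admission plugin landed; the kubectl check is added here as an illustration and is not part of the test:

    out/minikube-linux-amd64 start -p functional-965692 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-965692 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins
    # expect NamespaceAutoProvision to appear in the apiserver command line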

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-965692 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 logs: (1.177906756s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 logs --file /tmp/TestFunctionalserialLogsFileCmd4174696923/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 logs --file /tmp/TestFunctionalserialLogsFileCmd4174696923/001/logs.txt: (1.177529276s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

                                                
                                    
TestFunctional/serial/InvalidService (4.41s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-965692 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-965692
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-965692: exit status 115 (281.049653ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.13:30415 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-965692 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.41s)
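testdata/invalidsvc.yaml is not reproduced in this log; a hypothetical stand-in that triggers the same SVC_UNREACHABLE path is a NodePort Service whose selector matches no pods:

    # hypothetical equivalent of testdata/invalidsvc.yaml (selector matches nothing)
    cat <<'EOF' | kubectl --context functional-965692 apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: does-not-exist
      ports:
      - port: 80
    EOF
    out/minikube-linux-amd64 service invalid-svc -p functional-965692   # expect exit status 115: no running pod for the service
    kubectl --context functional-965692 delete svc invalid-svc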

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 config get cpus: exit status 14 (53.752676ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 config get cpus: exit status 14 (43.514171ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
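The set/get/unset round-trip above in plain form; exit status 14 on getting a missing key is what the test treats as success:

    out/minikube-linux-amd64 -p functional-965692 config get cpus     # exit 14: key not present in config
    out/minikube-linux-amd64 -p functional-965692 config set cpus 2
    out/minikube-linux-amd64 -p functional-965692 config get cpus     # prints the stored value
    out/minikube-linux-amd64 -p functional-965692 config unset cpus
    out/minikube-linux-amd64 -p functional-965692 config get cpus     # exit 14 again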

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-965692 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-965692 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 43286: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-965692 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.590289ms)

                                                
                                                
-- stdout --
	* [functional-965692] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:26:57.769972   43154 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:26:57.770063   43154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:26:57.770067   43154 out.go:304] Setting ErrFile to fd 2...
	I0807 18:26:57.770071   43154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:26:57.770277   43154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:26:57.770792   43154 out.go:298] Setting JSON to false
	I0807 18:26:57.771700   43154 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7764,"bootTime":1723047454,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:26:57.771758   43154 start.go:139] virtualization: kvm guest
	I0807 18:26:57.774044   43154 out.go:177] * [functional-965692] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0807 18:26:57.775687   43154 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:26:57.775743   43154 notify.go:220] Checking for updates...
	I0807 18:26:57.779059   43154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:26:57.780661   43154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:26:57.782356   43154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:26:57.783924   43154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:26:57.785271   43154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:26:57.787204   43154 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:26:57.787586   43154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:26:57.787658   43154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:26:57.802906   43154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35717
	I0807 18:26:57.803402   43154 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:26:57.803950   43154 main.go:141] libmachine: Using API Version  1
	I0807 18:26:57.803971   43154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:26:57.804388   43154 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:26:57.804589   43154 main.go:141] libmachine: (functional-965692) Calling .DriverName
	I0807 18:26:57.804856   43154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:26:57.805168   43154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:26:57.805215   43154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:26:57.819791   43154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36853
	I0807 18:26:57.820296   43154 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:26:57.820800   43154 main.go:141] libmachine: Using API Version  1
	I0807 18:26:57.820825   43154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:26:57.821123   43154 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:26:57.821346   43154 main.go:141] libmachine: (functional-965692) Calling .DriverName
	I0807 18:26:57.855268   43154 out.go:177] * Using the kvm2 driver based on existing profile
	I0807 18:26:57.856488   43154 start.go:297] selected driver: kvm2
	I0807 18:26:57.856502   43154 start.go:901] validating driver "kvm2" against &{Name:functional-965692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-965692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:26:57.856619   43154 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:26:57.858593   43154 out.go:177] 
	W0807 18:26:57.859855   43154 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0807 18:26:57.861375   43154 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
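The memory validation exercised above, in isolation: 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while the second --dry-run against the profile's existing 4000MB config validates cleanly:

    out/minikube-linux-amd64 start -p functional-965692 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exit 23
    out/minikube-linux-amd64 start -p functional-965692 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio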

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-965692 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-965692 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.456166ms)

                                                
                                                
-- stdout --
	* [functional-965692] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 18:26:58.030139   43210 out.go:291] Setting OutFile to fd 1 ...
	I0807 18:26:58.030244   43210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:26:58.030254   43210 out.go:304] Setting ErrFile to fd 2...
	I0807 18:26:58.030258   43210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 18:26:58.030537   43210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 18:26:58.031021   43210 out.go:298] Setting JSON to false
	I0807 18:26:58.031858   43210 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7764,"bootTime":1723047454,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0807 18:26:58.031913   43210 start.go:139] virtualization: kvm guest
	I0807 18:26:58.033968   43210 out.go:177] * [functional-965692] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0807 18:26:58.035832   43210 out.go:177]   - MINIKUBE_LOCATION=19389
	I0807 18:26:58.035903   43210 notify.go:220] Checking for updates...
	I0807 18:26:58.038525   43210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0807 18:26:58.039930   43210 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	I0807 18:26:58.041382   43210 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	I0807 18:26:58.042940   43210 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0807 18:26:58.044478   43210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0807 18:26:58.046510   43210 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 18:26:58.046935   43210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:26:58.047015   43210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:26:58.062065   43210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35533
	I0807 18:26:58.062473   43210 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:26:58.063021   43210 main.go:141] libmachine: Using API Version  1
	I0807 18:26:58.063047   43210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:26:58.063432   43210 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:26:58.063623   43210 main.go:141] libmachine: (functional-965692) Calling .DriverName
	I0807 18:26:58.063871   43210 driver.go:392] Setting default libvirt URI to qemu:///system
	I0807 18:26:58.064146   43210 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 18:26:58.064188   43210 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 18:26:58.079009   43210 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0807 18:26:58.079450   43210 main.go:141] libmachine: () Calling .GetVersion
	I0807 18:26:58.080113   43210 main.go:141] libmachine: Using API Version  1
	I0807 18:26:58.080136   43210 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 18:26:58.080525   43210 main.go:141] libmachine: () Calling .GetMachineName
	I0807 18:26:58.080718   43210 main.go:141] libmachine: (functional-965692) Calling .DriverName
	I0807 18:26:58.114142   43210 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0807 18:26:58.115427   43210 start.go:297] selected driver: kvm2
	I0807 18:26:58.115443   43210 start.go:901] validating driver "kvm2" against &{Name:functional-965692 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723026928-19389@sha256:7715fe0c5dce35b4eb757765cbbe02d40cd8b5effa0639735e42ad89f4f51ef0 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-965692 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0807 18:26:58.115555   43210 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0807 18:26:58.117773   43210 out.go:177] 
	W0807 18:26:58.119159   43210 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0807 18:26:58.120547   43210 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
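
This test re-runs the failing --dry-run start under a French locale and expects the localized RSRC_INSUFFICIENT_REQ_MEMORY message shown above. A sketch of reproducing it outside the harness, assuming LC_ALL=fr is how the locale is selected (the log does not show the environment the test actually sets):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Command and flags are copied from the log above; the LC_ALL setting is an assumption.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-965692",
			"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out)) // expected to contain the localized memory error, as in the stderr above
		fmt.Println("exit error:", err)
	}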

TestFunctional/parallel/StatusCmd (0.8s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

TestFunctional/parallel/ServiceCmdConnect (13.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-965692 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-965692 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-r54ss" [0f7b45f9-f844-4698-bff9-9dc96220c821] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-r54ss" [0f7b45f9-f844-4698-bff9-9dc96220c821] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.044318076s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.13:30358
functional_test.go:1671: http://192.168.39.13:30358: success! body:

Hostname: hello-node-connect-57b4589c47-r54ss

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.13:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.13:30358
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.61s)
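
The assertion behind this test is simple: resolve the service URL, GET it, and confirm the echoserver response names a hello-node-connect pod, as in the body above. A minimal sketch of that final check (the URL is the one discovered in this particular run and will differ between runs):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func main() {
		// In the test the URL comes from `minikube service hello-node-connect --url`.
		resp, err := http.Get("http://192.168.39.13:30358")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The echoserver reports the pod name in a "Hostname:" line, as seen above.
		fmt.Println("served by hello-node-connect pod:",
			strings.Contains(string(body), "Hostname: hello-node-connect"))
	}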

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (49.39s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5fb1c6dc-d643-4819-b478-dae4e0a83883] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003639853s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-965692 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-965692 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-965692 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-965692 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965692 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [afaa5c42-ceb7-424a-afb6-0352152e4b36] Pending
helpers_test.go:344: "sp-pod" [afaa5c42-ceb7-424a-afb6-0352152e4b36] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [afaa5c42-ceb7-424a-afb6-0352152e4b36] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004252241s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-965692 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-965692 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965692 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a06ca54e-6d3f-454b-bc4e-cbbd9f76072d] Pending
helpers_test.go:344: "sp-pod" [a06ca54e-6d3f-454b-bc4e-cbbd9f76072d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a06ca54e-6d3f-454b-bc4e-cbbd9f76072d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004528606s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-965692 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.39s)
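
The sequence above is checking persistence across pod restarts: a file created in the first sp-pod must still be listed by the recreated one, because both mount the same PVC. A condensed sketch of that flow driven through kubectl (manifests and names are the ones in the log; the waiting steps the test performs between commands are elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to kubectl against the test's context and returns combined output.
	func run(args ...string) string {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-965692"}, args...)...).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl error:", err)
		}
		return string(out)
	}

	func main() {
		run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// ... wait for sp-pod to be Running, as the test does ...
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// ... wait again; the file written before the restart must still be visible:
		fmt.Println(run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}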

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh -n functional-965692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cp functional-965692:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2029823034/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh -n functional-965692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh -n functional-965692 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)

TestFunctional/parallel/MySQL (26.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-965692 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-5rz46" [1e6018bb-e2bd-46e9-9269-8a9c70cd6773] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-5rz46" [1e6018bb-e2bd-46e9-9269-8a9c70cd6773] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003964457s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965692 exec mysql-64454c8b5c-5rz46 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965692 exec mysql-64454c8b5c-5rz46 -- mysql -ppassword -e "show databases;": exit status 1 (234.824756ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965692 exec mysql-64454c8b5c-5rz46 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965692 exec mysql-64454c8b5c-5rz46 -- mysql -ppassword -e "show databases;": exit status 1 (188.832233ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965692 exec mysql-64454c8b5c-5rz46 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.20s)
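
The two ERROR 2002 failures above are expected: mysqld inside the pod is still initializing, so the test simply retries the query until it succeeds. A sketch of that retry pattern (pod name copied from this run; interval and attempt count are illustrative, not the test's actual backoff):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		query := []string{"--context", "functional-965692", "exec", "mysql-64454c8b5c-5rz46", "--",
			"mysql", "-ppassword", "-e", "show databases;"}
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", query...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			// ERROR 2002 means mysqld is not accepting socket connections yet; back off and retry.
			fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
			time.Sleep(5 * time.Second)
		}
	}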

TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/28052/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /etc/test/nested/copy/28052/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

TestFunctional/parallel/CertSync (1.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/28052.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /etc/ssl/certs/28052.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/28052.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /usr/share/ca-certificates/28052.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/280522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /etc/ssl/certs/280522.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/280522.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /usr/share/ca-certificates/280522.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.44s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-965692 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "sudo systemctl is-active docker": exit status 1 (226.844806ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "sudo systemctl is-active containerd": exit status 1 (238.292047ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
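
systemctl is-active exits non-zero (status 3) when a unit is inactive, and minikube ssh propagates that as a failure, which is why both commands above "fail" while still printing the desired "inactive"; the test accepts that combination as a pass. A small sketch of the same interpretation (the helper name is mine, not the test's):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runtimeDisabled reports whether a container runtime's systemd unit is not running
	// on the node: the exit status may be non-zero, but stdout must read "inactive".
	func runtimeDisabled(unit string) bool {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-965692",
			"ssh", "sudo systemctl is-active "+unit)
		out, _ := cmd.Output() // stdout only; a non-zero exit is expected for inactive units
		return strings.TrimSpace(string(out)) == "inactive"
	}

	func main() {
		fmt.Println("docker disabled:", runtimeDisabled("docker"))
		fmt.Println("containerd disabled:", runtimeDisabled("containerd"))
	}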

TestFunctional/parallel/License (0.65s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-965692 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-965692 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-sb4tn" [be1dc998-f8cd-4cc0-9cc1-59d9dcf3f6dd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-sb4tn" [be1dc998-f8cd-4cc0-9cc1-59d9dcf3f6dd] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00465424s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965692 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-965692
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-965692
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965692 image ls --format short --alsologtostderr:
I0807 18:27:01.301788   43430 out.go:291] Setting OutFile to fd 1 ...
I0807 18:27:01.302034   43430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:01.302044   43430 out.go:304] Setting ErrFile to fd 2...
I0807 18:27:01.302048   43430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:01.302290   43430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
I0807 18:27:01.302853   43430 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:01.302967   43430 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:01.303402   43430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:01.303454   43430 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:01.318370   43430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
I0807 18:27:01.318922   43430 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:01.319464   43430 main.go:141] libmachine: Using API Version  1
I0807 18:27:01.319484   43430 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:01.319884   43430 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:01.320161   43430 main.go:141] libmachine: (functional-965692) Calling .GetState
I0807 18:27:01.322475   43430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:01.322540   43430 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:01.337307   43430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42989
I0807 18:27:01.337694   43430 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:01.338268   43430 main.go:141] libmachine: Using API Version  1
I0807 18:27:01.338293   43430 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:01.338640   43430 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:01.338831   43430 main.go:141] libmachine: (functional-965692) Calling .DriverName
I0807 18:27:01.339048   43430 ssh_runner.go:195] Run: systemctl --version
I0807 18:27:01.339081   43430 main.go:141] libmachine: (functional-965692) Calling .GetSSHHostname
I0807 18:27:01.342786   43430 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:01.343233   43430 main.go:141] libmachine: (functional-965692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:ff:90", ip: ""} in network mk-functional-965692: {Iface:virbr1 ExpiryTime:2024-08-07 19:18:09 +0000 UTC Type:0 Mac:52:54:00:74:ff:90 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-965692 Clientid:01:52:54:00:74:ff:90}
I0807 18:27:01.343260   43430 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined IP address 192.168.39.13 and MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:01.343614   43430 main.go:141] libmachine: (functional-965692) Calling .GetSSHPort
I0807 18:27:01.343828   43430 main.go:141] libmachine: (functional-965692) Calling .GetSSHKeyPath
I0807 18:27:01.344125   43430 main.go:141] libmachine: (functional-965692) Calling .GetSSHUsername
I0807 18:27:01.344308   43430 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/functional-965692/id_rsa Username:docker}
I0807 18:27:01.485283   43430 ssh_runner.go:195] Run: sudo crictl images --output json
I0807 18:27:01.560316   43430 main.go:141] libmachine: Making call to close driver server
I0807 18:27:01.560336   43430 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:01.560619   43430 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
I0807 18:27:01.560668   43430 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:01.560687   43430 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:01.560705   43430 main.go:141] libmachine: Making call to close driver server
I0807 18:27:01.560718   43430 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:01.560962   43430 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:01.560978   43430 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
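
As the --alsologtostderr trace shows, image ls is ultimately backed by `sudo crictl images --output json` run over SSH on the node. A sketch of turning that JSON into the short listing above; the field names follow crictl's JSON output format as I understand it, so treat them as an assumption rather than a documented contract:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictl wraps the entries in an "images" array; only the field used here is declared.
	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-965692",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			panic(err)
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag) // e.g. registry.k8s.io/pause:latest, as in the short listing above
			}
		}
	}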

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965692 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-965692  | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-965692  | 9fd0342a11782 | 3.33kB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-965692  | 68d92e48147ff | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965692 image ls --format table --alsologtostderr:
I0807 18:27:06.195698   43827 out.go:291] Setting OutFile to fd 1 ...
I0807 18:27:06.195796   43827 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:06.195805   43827 out.go:304] Setting ErrFile to fd 2...
I0807 18:27:06.195812   43827 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:06.196043   43827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
I0807 18:27:06.196656   43827 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:06.196774   43827 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:06.197164   43827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:06.197220   43827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:06.212640   43827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37295
I0807 18:27:06.213154   43827 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:06.213817   43827 main.go:141] libmachine: Using API Version  1
I0807 18:27:06.213873   43827 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:06.214247   43827 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:06.214447   43827 main.go:141] libmachine: (functional-965692) Calling .GetState
I0807 18:27:06.216592   43827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:06.216646   43827 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:06.231889   43827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
I0807 18:27:06.232359   43827 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:06.232820   43827 main.go:141] libmachine: Using API Version  1
I0807 18:27:06.232845   43827 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:06.233151   43827 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:06.233365   43827 main.go:141] libmachine: (functional-965692) Calling .DriverName
I0807 18:27:06.233567   43827 ssh_runner.go:195] Run: systemctl --version
I0807 18:27:06.233596   43827 main.go:141] libmachine: (functional-965692) Calling .GetSSHHostname
I0807 18:27:06.236275   43827 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:06.236675   43827 main.go:141] libmachine: (functional-965692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:ff:90", ip: ""} in network mk-functional-965692: {Iface:virbr1 ExpiryTime:2024-08-07 19:18:09 +0000 UTC Type:0 Mac:52:54:00:74:ff:90 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-965692 Clientid:01:52:54:00:74:ff:90}
I0807 18:27:06.236706   43827 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined IP address 192.168.39.13 and MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:06.236815   43827 main.go:141] libmachine: (functional-965692) Calling .GetSSHPort
I0807 18:27:06.237083   43827 main.go:141] libmachine: (functional-965692) Calling .GetSSHKeyPath
I0807 18:27:06.237231   43827 main.go:141] libmachine: (functional-965692) Calling .GetSSHUsername
I0807 18:27:06.237380   43827 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/functional-965692/id_rsa Username:docker}
I0807 18:27:06.385101   43827 ssh_runner.go:195] Run: sudo crictl images --output json
I0807 18:27:06.497725   43827 main.go:141] libmachine: Making call to close driver server
I0807 18:27:06.497745   43827 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:06.498038   43827 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:06.498063   43827 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:06.498080   43827 main.go:141] libmachine: Making call to close driver server
I0807 18:27:06.498090   43827 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:06.499511   43827 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
I0807 18:27:06.499555   43827 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:06.499576   43827 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965692 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7
dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repo
Tags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"8b088c63c004f920c9fe13190e15134403c339bc4a6ada58ea7b447d18bd14c8","repoDigests":["docker.io/library/7a8bbd459ddeb3338dadd6967f5d2ff2b95bd0677f73a6180eb1b654e09c44ad-tmp@sha256:ae4e2179af37f4f035259fa93c6c84e342fa950b7f8d5333320842b34df044f9"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8
c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9fd0342a1178267f7e593745bbd0a79b5bd9b295f6bbb72f78bf1395697cbbbb","repoDigests":["localhost/minikube-local-cache-test@sha256:eebf0beb03c75cbe57d3e9142c4ae08aace5792b567087339b005407597d85a6"],"repoTags":["localhost/minikube-local-cache-test:functional-965692"],"size":"3330"},{"id":"68d92e48147ff1e59728549f9f37aac6d8a041e3cd6a966f86c870fa17dd9833","repoDigests":["localhost/my-image@sha256:8504d3133211d7945b7adba8df8bec7e8048bc4596344b1dd0577fd054e3cf73"],"repoTags":["localhost/my-image:functional-965692"],"size":"1468600"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registr
y.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busy
box@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-965692"],"size":"4943877"},{"id":"a72860
cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965692 image ls --format json --alsologtostderr:
I0807 18:27:05.867344   43773 out.go:291] Setting OutFile to fd 1 ...
I0807 18:27:05.867457   43773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:05.867467   43773 out.go:304] Setting ErrFile to fd 2...
I0807 18:27:05.867475   43773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:05.867642   43773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
I0807 18:27:05.868235   43773 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:05.868341   43773 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:05.868755   43773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:05.868809   43773 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:05.884409   43773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
I0807 18:27:05.884893   43773 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:05.885341   43773 main.go:141] libmachine: Using API Version  1
I0807 18:27:05.885356   43773 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:05.885639   43773 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:05.885780   43773 main.go:141] libmachine: (functional-965692) Calling .GetState
I0807 18:27:05.887701   43773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:05.887758   43773 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:05.903904   43773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
I0807 18:27:05.904398   43773 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:05.904915   43773 main.go:141] libmachine: Using API Version  1
I0807 18:27:05.904942   43773 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:05.905265   43773 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:05.905457   43773 main.go:141] libmachine: (functional-965692) Calling .DriverName
I0807 18:27:05.905650   43773 ssh_runner.go:195] Run: systemctl --version
I0807 18:27:05.905684   43773 main.go:141] libmachine: (functional-965692) Calling .GetSSHHostname
I0807 18:27:05.908866   43773 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:05.909225   43773 main.go:141] libmachine: (functional-965692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:ff:90", ip: ""} in network mk-functional-965692: {Iface:virbr1 ExpiryTime:2024-08-07 19:18:09 +0000 UTC Type:0 Mac:52:54:00:74:ff:90 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-965692 Clientid:01:52:54:00:74:ff:90}
I0807 18:27:05.909245   43773 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined IP address 192.168.39.13 and MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:05.909493   43773 main.go:141] libmachine: (functional-965692) Calling .GetSSHPort
I0807 18:27:05.909658   43773 main.go:141] libmachine: (functional-965692) Calling .GetSSHKeyPath
I0807 18:27:05.909804   43773 main.go:141] libmachine: (functional-965692) Calling .GetSSHUsername
I0807 18:27:05.909931   43773 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/functional-965692/id_rsa Username:docker}
I0807 18:27:06.027427   43773 ssh_runner.go:195] Run: sudo crictl images --output json
I0807 18:27:06.147092   43773 main.go:141] libmachine: Making call to close driver server
I0807 18:27:06.147120   43773 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:06.147381   43773 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:06.147400   43773 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:06.147415   43773 main.go:141] libmachine: Making call to close driver server
I0807 18:27:06.147423   43773 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:06.147726   43773 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
I0807 18:27:06.147736   43773 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:06.147761   43773 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965692 image ls --format yaml --alsologtostderr:
- id: 9fd0342a1178267f7e593745bbd0a79b5bd9b295f6bbb72f78bf1395697cbbbb
repoDigests:
- localhost/minikube-local-cache-test@sha256:eebf0beb03c75cbe57d3e9142c4ae08aace5792b567087339b005407597d85a6
repoTags:
- localhost/minikube-local-cache-test:functional-965692
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-965692
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965692 image ls --format yaml --alsologtostderr:
I0807 18:27:01.615011   43453 out.go:291] Setting OutFile to fd 1 ...
I0807 18:27:01.615284   43453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:01.615297   43453 out.go:304] Setting ErrFile to fd 2...
I0807 18:27:01.615303   43453 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:01.615588   43453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
I0807 18:27:01.616377   43453 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:01.616526   43453 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:01.617140   43453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:01.617194   43453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:01.631655   43453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46403
I0807 18:27:01.632121   43453 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:01.632714   43453 main.go:141] libmachine: Using API Version  1
I0807 18:27:01.632742   43453 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:01.633089   43453 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:01.633283   43453 main.go:141] libmachine: (functional-965692) Calling .GetState
I0807 18:27:01.634994   43453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:01.635074   43453 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:01.650686   43453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
I0807 18:27:01.651195   43453 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:01.651753   43453 main.go:141] libmachine: Using API Version  1
I0807 18:27:01.651780   43453 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:01.652135   43453 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:01.652364   43453 main.go:141] libmachine: (functional-965692) Calling .DriverName
I0807 18:27:01.652605   43453 ssh_runner.go:195] Run: systemctl --version
I0807 18:27:01.652635   43453 main.go:141] libmachine: (functional-965692) Calling .GetSSHHostname
I0807 18:27:01.656066   43453 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:01.656616   43453 main.go:141] libmachine: (functional-965692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:ff:90", ip: ""} in network mk-functional-965692: {Iface:virbr1 ExpiryTime:2024-08-07 19:18:09 +0000 UTC Type:0 Mac:52:54:00:74:ff:90 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-965692 Clientid:01:52:54:00:74:ff:90}
I0807 18:27:01.656649   43453 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined IP address 192.168.39.13 and MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:01.656788   43453 main.go:141] libmachine: (functional-965692) Calling .GetSSHPort
I0807 18:27:01.657023   43453 main.go:141] libmachine: (functional-965692) Calling .GetSSHKeyPath
I0807 18:27:01.657197   43453 main.go:141] libmachine: (functional-965692) Calling .GetSSHUsername
I0807 18:27:01.657354   43453 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/functional-965692/id_rsa Username:docker}
I0807 18:27:01.756343   43453 ssh_runner.go:195] Run: sudo crictl images --output json
I0807 18:27:01.818116   43453 main.go:141] libmachine: Making call to close driver server
I0807 18:27:01.818134   43453 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:01.818416   43453 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:01.818434   43453 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:01.818442   43453 main.go:141] libmachine: Making call to close driver server
I0807 18:27:01.818449   43453 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:01.818456   43453 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
I0807 18:27:01.818660   43453 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:01.818674   43453 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:01.818695   43453 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
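For reference, the image-listing flow exercised above can be reproduced by hand against the same profile. This is a minimal sketch built only from commands recorded in the log (the crictl call is what minikube itself runs over SSH, per the ssh_runner lines above); the profile name is the one used in this run:

	# list images known to the CRI-O runtime, formatted as YAML
	out/minikube-linux-amd64 -p functional-965692 image ls --format yaml --alsologtostderr
	# the equivalent query minikube issues inside the guest
	out/minikube-linux-amd64 -p functional-965692 ssh "sudo crictl images --output json"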

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh pgrep buildkitd: exit status 1 (188.863638ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image build -t localhost/my-image:functional-965692 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 image build -t localhost/my-image:functional-965692 testdata/build --alsologtostderr: (3.493643664s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-965692 image build -t localhost/my-image:functional-965692 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8b088c63c00
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-965692
--> 68d92e48147
Successfully tagged localhost/my-image:functional-965692
68d92e48147ff1e59728549f9f37aac6d8a041e3cd6a966f86c870fa17dd9833
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-965692 image build -t localhost/my-image:functional-965692 testdata/build --alsologtostderr:
I0807 18:27:02.052701   43507 out.go:291] Setting OutFile to fd 1 ...
I0807 18:27:02.052971   43507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:02.052987   43507 out.go:304] Setting ErrFile to fd 2...
I0807 18:27:02.052997   43507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0807 18:27:02.053182   43507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
I0807 18:27:02.053728   43507 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:02.054307   43507 config.go:182] Loaded profile config "functional-965692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0807 18:27:02.054667   43507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:02.054708   43507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:02.070501   43507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
I0807 18:27:02.070963   43507 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:02.071433   43507 main.go:141] libmachine: Using API Version  1
I0807 18:27:02.071454   43507 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:02.071846   43507 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:02.072114   43507 main.go:141] libmachine: (functional-965692) Calling .GetState
I0807 18:27:02.073976   43507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0807 18:27:02.074014   43507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0807 18:27:02.088910   43507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
I0807 18:27:02.089389   43507 main.go:141] libmachine: () Calling .GetVersion
I0807 18:27:02.089899   43507 main.go:141] libmachine: Using API Version  1
I0807 18:27:02.089918   43507 main.go:141] libmachine: () Calling .SetConfigRaw
I0807 18:27:02.090203   43507 main.go:141] libmachine: () Calling .GetMachineName
I0807 18:27:02.090399   43507 main.go:141] libmachine: (functional-965692) Calling .DriverName
I0807 18:27:02.090614   43507 ssh_runner.go:195] Run: systemctl --version
I0807 18:27:02.090640   43507 main.go:141] libmachine: (functional-965692) Calling .GetSSHHostname
I0807 18:27:02.093411   43507 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:02.093770   43507 main.go:141] libmachine: (functional-965692) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:ff:90", ip: ""} in network mk-functional-965692: {Iface:virbr1 ExpiryTime:2024-08-07 19:18:09 +0000 UTC Type:0 Mac:52:54:00:74:ff:90 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-965692 Clientid:01:52:54:00:74:ff:90}
I0807 18:27:02.093799   43507 main.go:141] libmachine: (functional-965692) DBG | domain functional-965692 has defined IP address 192.168.39.13 and MAC address 52:54:00:74:ff:90 in network mk-functional-965692
I0807 18:27:02.093938   43507 main.go:141] libmachine: (functional-965692) Calling .GetSSHPort
I0807 18:27:02.094112   43507 main.go:141] libmachine: (functional-965692) Calling .GetSSHKeyPath
I0807 18:27:02.094248   43507 main.go:141] libmachine: (functional-965692) Calling .GetSSHUsername
I0807 18:27:02.094378   43507 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/functional-965692/id_rsa Username:docker}
I0807 18:27:02.192580   43507 build_images.go:161] Building image from path: /tmp/build.2847363098.tar
I0807 18:27:02.192695   43507 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0807 18:27:02.207162   43507 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2847363098.tar
I0807 18:27:02.214429   43507 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2847363098.tar: stat -c "%s %y" /var/lib/minikube/build/build.2847363098.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2847363098.tar': No such file or directory
I0807 18:27:02.214466   43507 ssh_runner.go:362] scp /tmp/build.2847363098.tar --> /var/lib/minikube/build/build.2847363098.tar (3072 bytes)
I0807 18:27:02.243329   43507 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2847363098
I0807 18:27:02.255830   43507 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2847363098 -xf /var/lib/minikube/build/build.2847363098.tar
I0807 18:27:02.267245   43507 crio.go:315] Building image: /var/lib/minikube/build/build.2847363098
I0807 18:27:02.267339   43507 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-965692 /var/lib/minikube/build/build.2847363098 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0807 18:27:05.448939   43507 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-965692 /var/lib/minikube/build/build.2847363098 --cgroup-manager=cgroupfs: (3.181570882s)
I0807 18:27:05.449007   43507 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2847363098
I0807 18:27:05.481931   43507 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2847363098.tar
I0807 18:27:05.500143   43507 build_images.go:217] Built localhost/my-image:functional-965692 from /tmp/build.2847363098.tar
I0807 18:27:05.500175   43507 build_images.go:133] succeeded building to: functional-965692
I0807 18:27:05.500179   43507 build_images.go:134] failed building to: 
I0807 18:27:05.500255   43507 main.go:141] libmachine: Making call to close driver server
I0807 18:27:05.500272   43507 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:05.500550   43507 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:05.500568   43507 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:05.500576   43507 main.go:141] libmachine: Making call to close driver server
I0807 18:27:05.500583   43507 main.go:141] libmachine: (functional-965692) Calling .Close
I0807 18:27:05.500779   43507 main.go:141] libmachine: Successfully made call to close driver server
I0807 18:27:05.500792   43507 main.go:141] libmachine: Making call to close connection to plugin binary
I0807 18:27:05.500817   43507 main.go:141] libmachine: (functional-965692) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.00s)
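The build above corresponds to a three-step build context. The sketch below reconstructs it from the STEP lines in the log; the exact contents of testdata/build are not shown in this report, so the Dockerfile body and the content.txt payload are assumptions, not the test's actual files:

	mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
	printf 'hello\n' > content.txt                  # placeholder payload (assumed)
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-965692 image build -t localhost/my-image:functional-965692 . --alsologtostderr
	out/minikube-linux-amd64 -p functional-965692 image ls   # verify the new tag is present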

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.973072027s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-965692
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image load --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 image load --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr: (1.437158756s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)
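The load path above moves an image from the host Docker daemon into the cluster's CRI-O store. A minimal sketch of the same round trip, assembled from this test and the preceding Setup test:

	docker pull docker.io/kicbase/echo-server:1.0
	docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-965692
	out/minikube-linux-amd64 -p functional-965692 image load --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
	out/minikube-linux-amd64 -p functional-965692 image ls   # the tag should now appear in the cluster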

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image load --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-965692
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image load --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image save docker.io/kicbase/echo-server:functional-965692 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image rm docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service list -o json
functional_test.go:1490: Took "346.759276ms" to run "out/minikube-linux-amd64 -p functional-965692 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.13:32717
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.13:32717
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
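The ServiceCmd checks above resolve a NodePort service to a reachable URL in several output formats. A sketch of the same queries as recorded in the log (the endpoint https://192.168.39.13:32717 is specific to this run):

	out/minikube-linux-amd64 -p functional-965692 service list -o json
	out/minikube-linux-amd64 -p functional-965692 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-965692 service hello-node --url --format={{.IP}}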

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-965692
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 image save --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-965692 image save --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr: (2.038920521s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-965692
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.13s)
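The two save variants above export in opposite directions: to a tarball on the host, and back into the host Docker daemon. A sketch assembled from the commands in the log (the tar path is a placeholder):

	# save the cluster image to a tar archive on the host
	out/minikube-linux-amd64 -p functional-965692 image save docker.io/kicbase/echo-server:functional-965692 ./echo-server-save.tar --alsologtostderr
	# drop the host copy, then restore it from the cluster into the local Docker daemon
	docker rmi docker.io/kicbase/echo-server:functional-965692
	out/minikube-linux-amd64 -p functional-965692 image save --daemon docker.io/kicbase/echo-server:functional-965692 --alsologtostderr
	docker image inspect docker.io/kicbase/echo-server:functional-965692   # confirms the image is back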

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "214.259017ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "46.104142ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "216.873323ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "45.485927ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdany-port486218888/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723055212041935942" to /tmp/TestFunctionalparallelMountCmdany-port486218888/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723055212041935942" to /tmp/TestFunctionalparallelMountCmdany-port486218888/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723055212041935942" to /tmp/TestFunctionalparallelMountCmdany-port486218888/001/test-1723055212041935942
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.162514ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  7 18:26 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  7 18:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  7 18:26 test-1723055212041935942
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh cat /mount-9p/test-1723055212041935942
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-965692 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [08887ea2-ca49-4cc5-bf51-fc70bfbb676a] Pending
helpers_test.go:344: "busybox-mount" [08887ea2-ca49-4cc5-bf51-fc70bfbb676a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [08887ea2-ca49-4cc5-bf51-fc70bfbb676a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [08887ea2-ca49-4cc5-bf51-fc70bfbb676a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.006604872s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-965692 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdany-port486218888/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.80s)
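The any-port test above runs a background 9p mount, confirms it from inside the guest, has a pod consume it, and then unmounts. A condensed sketch of the manual equivalent using only commands that appear in the log (the host directory is a placeholder, and backgrounding with & stands in for the test's daemon helper):

	out/minikube-linux-amd64 mount -p functional-965692 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p"   # verify the 9p mount is live
	out/minikube-linux-amd64 -p functional-965692 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-965692 ssh "sudo umount -f /mount-9p"          # tear down when finished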

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdspecific-port3188235096/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.807376ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdspecific-port3188235096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "sudo umount -f /mount-9p": exit status 1 (267.057846ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-965692 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdspecific-port3188235096/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T" /mount1: exit status 1 (314.743439ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-965692 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-965692 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-965692 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1775289524/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/08/07 18:27:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-965692
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-965692
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-965692
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (273.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-198246 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0807 18:31:31.076809   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.082719   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.093080   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.113740   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.154107   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.234506   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.394643   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:31.715385   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:32.356516   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:33.637430   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:36.198450   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:41.318693   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:31:51.559163   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-198246 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m33.248382428s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (273.92s)
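For context, the HA start above brings up a multi-control-plane cluster with the same driver and runtime flags used throughout this report; a minimal sketch of that flow, taken from the commands in this and the following tests:

	out/minikube-linux-amd64 start -p ha-198246 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr    # expect multiple control-plane nodes reported
	out/minikube-linux-amd64 node add -p ha-198246 -v=7 --alsologtostderr  # add a worker, as the later AddWorkerNode test does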

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-198246 -- rollout status deployment/busybox: (4.341954558s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-8g62d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-chh26 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-k2t25 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-8g62d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-chh26 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-k2t25 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-8g62d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-chh26 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-k2t25 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-8g62d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-8g62d -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-chh26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-chh26 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-k2t25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-198246 -- exec busybox-fc5497c4f-k2t25 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
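The DNS and host-reachability probes above run inside each busybox pod. A sketch of one iteration, using the pod name and host gateway from this run (both differ between runs):

	kubectl --context ha-198246 exec busybox-fc5497c4f-8g62d -- nslookup kubernetes.default.svc.cluster.local
	kubectl --context ha-198246 exec busybox-fc5497c4f-8g62d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-198246 exec busybox-fc5497c4f-8g62d -- sh -c "ping -c 1 192.168.39.1"   # 192.168.39.1 is the host-side gateway here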

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (84.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-198246 -v=7 --alsologtostderr
E0807 18:32:12.040069   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:32:53.001310   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-198246 -v=7 --alsologtostderr: (1m23.948776537s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (84.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-198246 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp testdata/cp-test.txt ha-198246:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246:/home/docker/cp-test.txt ha-198246-m02:/home/docker/cp-test_ha-198246_ha-198246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test_ha-198246_ha-198246-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246:/home/docker/cp-test.txt ha-198246-m03:/home/docker/cp-test_ha-198246_ha-198246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test_ha-198246_ha-198246-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246:/home/docker/cp-test.txt ha-198246-m04:/home/docker/cp-test_ha-198246_ha-198246-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test_ha-198246_ha-198246-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp testdata/cp-test.txt ha-198246-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m02:/home/docker/cp-test.txt ha-198246:/home/docker/cp-test_ha-198246-m02_ha-198246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test_ha-198246-m02_ha-198246.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m02:/home/docker/cp-test.txt ha-198246-m03:/home/docker/cp-test_ha-198246-m02_ha-198246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test_ha-198246-m02_ha-198246-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m02:/home/docker/cp-test.txt ha-198246-m04:/home/docker/cp-test_ha-198246-m02_ha-198246-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test_ha-198246-m02_ha-198246-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp testdata/cp-test.txt ha-198246-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt ha-198246:/home/docker/cp-test_ha-198246-m03_ha-198246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test_ha-198246-m03_ha-198246.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt ha-198246-m02:/home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test_ha-198246-m03_ha-198246-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m03:/home/docker/cp-test.txt ha-198246-m04:/home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test_ha-198246-m03_ha-198246-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp testdata/cp-test.txt ha-198246-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4028937378/001/cp-test_ha-198246-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt ha-198246:/home/docker/cp-test_ha-198246-m04_ha-198246.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246 "sudo cat /home/docker/cp-test_ha-198246-m04_ha-198246.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt ha-198246-m02:/home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m02 "sudo cat /home/docker/cp-test_ha-198246-m04_ha-198246-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 cp ha-198246-m04:/home/docker/cp-test.txt ha-198246-m03:/home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 ssh -n ha-198246-m03 "sudo cat /home/docker/cp-test_ha-198246-m04_ha-198246-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.71s)
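The cp/ssh pairs above all follow one pattern: push testdata/cp-test.txt onto a node with "minikube cp", then read it back over "minikube ssh -n <node> sudo cat" and compare the contents. Below is a minimal standalone Go sketch of that round-trip; it is a simplified illustration, not the actual helpers_test.go code, and the profile and node names are simply the ones used in the log above.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "ha-198246"
	const node = "ha-198246-m02"

	// The expected contents come from the same testdata file the tests copy.
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}

	// Step 1: copy the file onto the target node.
	cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}

	// Step 2: read it back over SSH and verify it round-tripped unchanged.
	ssh := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	got, err := ssh.Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		panic("copied file does not match testdata/cp-test.txt")
	}
	fmt.Println("cp-test.txt verified on", node)
}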

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.502673209s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-198246 node delete m03 -v=7 --alsologtostderr: (16.388817511s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.13s)
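The go-template passed to "kubectl get nodes" above is a plain Go text/template that walks .items and prints the status of each node's Ready condition. The following standalone sketch (not part of the test suite; it assumes "kubectl get nodes -o json" is piped to stdin) evaluates the same template locally.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Usage (hypothetical): kubectl get nodes -o json | go run ready.go
func main() {
	// Decode the node list into generic maps so the template's
	// .items / .status.conditions lookups behave as they do in kubectl.
	var nodes map[string]interface{}
	if err := json.NewDecoder(os.Stdin).Decode(&nodes); err != nil {
		panic(err)
	}

	// The Ready-status template used by the tests above
	// (without the surrounding shell quoting).
	const tmpl = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}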

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (351.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-198246 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0807 18:46:31.079955   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:47:54.125332   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 18:51:31.076262   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-198246 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m51.032042647s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (351.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (82.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-198246 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-198246 --control-plane -v=7 --alsologtostderr: (1m21.438099725s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-198246 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
x
+
TestJSONOutput/start/Command (58.6s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-297279 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-297279 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.598335737s)
--- PASS: TestJSONOutput/start/Command (58.60s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-297279 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-297279 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.32s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-297279 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-297279 --output=json --user=testUser: (7.324022978s)
--- PASS: TestJSONOutput/stop/Command (7.32s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-918099 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-918099 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.98072ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"49625226-6705-4310-b89b-c92b4c125c0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-918099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a5d7698f-b5ba-4591-b8f8-c2ac70850bd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19389"}}
	{"specversion":"1.0","id":"6bedf9ef-c7a2-4c75-aa13-f27abb029c30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"83a3aae6-af1b-4d26-b7ac-c3906cd89329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig"}}
	{"specversion":"1.0","id":"68185c9f-5101-4c23-983e-72a16115213c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube"}}
	{"specversion":"1.0","id":"efe4a406-23bf-4e8e-b095-6522b3f60005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8b8ba511-0987-4821-b83a-7f4ba972bc44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f647838-de34-40a0-82b7-23aab7508731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-918099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-918099
--- PASS: TestErrorJSONOutput (0.19s)
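Each stdout line above is a CloudEvents-style JSON object with specversion, id, source, type, datacontenttype and a data payload; the final io.k8s.sigs.minikube.error event carries the error name and exit code. A minimal sketch that decodes the error event shown above (the struct is illustrative only, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The error event copied from the stdout block above.
	line := `{"specversion":"1.0","id":"5f647838-de34-40a0-82b7-23aab7508731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Error events carry an exit code and a stable name alongside the message.
	fmt.Printf("%s: %s (exit code %s)\n",
		ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
}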

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.16s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-490711 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-490711 --driver=kvm2  --container-runtime=crio: (44.274984837s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-493254 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-493254 --driver=kvm2  --container-runtime=crio: (42.262297885s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-490711
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-493254
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-493254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-493254
helpers_test.go:175: Cleaning up "first-490711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-490711
--- PASS: TestMinikubeProfile (89.16s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-214288 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-214288 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.335344206s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.34s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-214288 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-214288 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-230438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0807 18:56:31.079716   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-230438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.588105004s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-214288 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-230438
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-230438: (1.268957906s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.51s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-230438
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-230438: (22.511398435s)
--- PASS: TestMountStart/serial/RestartStopped (23.51s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-230438 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (125.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334028 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334028 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.181714868s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-334028 -- rollout status deployment/busybox: (4.092146911s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-v64x9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-vlwmp -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-v64x9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-vlwmp -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-v64x9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-vlwmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.53s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-v64x9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-v64x9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-vlwmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-334028 -- exec busybox-fc5497c4f-vlwmp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (54.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334028 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-334028 -v 3 --alsologtostderr: (54.376444575s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.93s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-334028 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp testdata/cp-test.txt multinode-334028:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028:/home/docker/cp-test.txt multinode-334028-m02:/home/docker/cp-test_multinode-334028_multinode-334028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test_multinode-334028_multinode-334028-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028:/home/docker/cp-test.txt multinode-334028-m03:/home/docker/cp-test_multinode-334028_multinode-334028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test_multinode-334028_multinode-334028-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp testdata/cp-test.txt multinode-334028-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt multinode-334028:/home/docker/cp-test_multinode-334028-m02_multinode-334028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test_multinode-334028-m02_multinode-334028.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m02:/home/docker/cp-test.txt multinode-334028-m03:/home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test_multinode-334028-m02_multinode-334028-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp testdata/cp-test.txt multinode-334028-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1317190128/001/cp-test_multinode-334028-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt multinode-334028:/home/docker/cp-test_multinode-334028-m03_multinode-334028.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028 "sudo cat /home/docker/cp-test_multinode-334028-m03_multinode-334028.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 cp multinode-334028-m03:/home/docker/cp-test.txt multinode-334028-m02:/home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 ssh -n multinode-334028-m02 "sudo cat /home/docker/cp-test_multinode-334028-m03_multinode-334028-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-334028 node stop m03: (1.467937972s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334028 status: exit status 7 (426.315599ms)

                                                
                                                
-- stdout --
	multinode-334028
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334028-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334028-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr: exit status 7 (413.312874ms)

                                                
                                                
-- stdout --
	multinode-334028
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-334028-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-334028-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0807 19:00:31.166921   61659 out.go:291] Setting OutFile to fd 1 ...
	I0807 19:00:31.167041   61659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:00:31.167049   61659 out.go:304] Setting ErrFile to fd 2...
	I0807 19:00:31.167053   61659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0807 19:00:31.167231   61659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19389-20864/.minikube/bin
	I0807 19:00:31.167376   61659 out.go:298] Setting JSON to false
	I0807 19:00:31.167397   61659 mustload.go:65] Loading cluster: multinode-334028
	I0807 19:00:31.167486   61659 notify.go:220] Checking for updates...
	I0807 19:00:31.167733   61659 config.go:182] Loaded profile config "multinode-334028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0807 19:00:31.167748   61659 status.go:255] checking status of multinode-334028 ...
	I0807 19:00:31.168171   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.168300   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.188074   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0807 19:00:31.188463   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.189074   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.189103   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.189405   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.189582   61659 main.go:141] libmachine: (multinode-334028) Calling .GetState
	I0807 19:00:31.191036   61659 status.go:330] multinode-334028 host status = "Running" (err=<nil>)
	I0807 19:00:31.191049   61659 host.go:66] Checking if "multinode-334028" exists ...
	I0807 19:00:31.191345   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.191401   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.206450   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0807 19:00:31.206772   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.207175   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.207200   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.207503   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.207660   61659 main.go:141] libmachine: (multinode-334028) Calling .GetIP
	I0807 19:00:31.210349   61659 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:00:31.210753   61659 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:00:31.210794   61659 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:00:31.210897   61659 host.go:66] Checking if "multinode-334028" exists ...
	I0807 19:00:31.211175   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.211206   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.225910   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0807 19:00:31.226296   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.226743   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.226764   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.227069   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.227245   61659 main.go:141] libmachine: (multinode-334028) Calling .DriverName
	I0807 19:00:31.227425   61659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:00:31.227443   61659 main.go:141] libmachine: (multinode-334028) Calling .GetSSHHostname
	I0807 19:00:31.230139   61659 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:00:31.230528   61659 main.go:141] libmachine: (multinode-334028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:cf:b6", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:57:29 +0000 UTC Type:0 Mac:52:54:00:f6:cf:b6 Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:multinode-334028 Clientid:01:52:54:00:f6:cf:b6}
	I0807 19:00:31.230560   61659 main.go:141] libmachine: (multinode-334028) DBG | domain multinode-334028 has defined IP address 192.168.39.165 and MAC address 52:54:00:f6:cf:b6 in network mk-multinode-334028
	I0807 19:00:31.230716   61659 main.go:141] libmachine: (multinode-334028) Calling .GetSSHPort
	I0807 19:00:31.230855   61659 main.go:141] libmachine: (multinode-334028) Calling .GetSSHKeyPath
	I0807 19:00:31.230982   61659 main.go:141] libmachine: (multinode-334028) Calling .GetSSHUsername
	I0807 19:00:31.231108   61659 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028/id_rsa Username:docker}
	I0807 19:00:31.311831   61659 ssh_runner.go:195] Run: systemctl --version
	I0807 19:00:31.318064   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:00:31.332556   61659 kubeconfig.go:125] found "multinode-334028" server: "https://192.168.39.165:8443"
	I0807 19:00:31.332580   61659 api_server.go:166] Checking apiserver status ...
	I0807 19:00:31.332612   61659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0807 19:00:31.348478   61659 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0807 19:00:31.358359   61659 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0807 19:00:31.358443   61659 ssh_runner.go:195] Run: ls
	I0807 19:00:31.363145   61659 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0807 19:00:31.367402   61659 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0807 19:00:31.367430   61659 status.go:422] multinode-334028 apiserver status = Running (err=<nil>)
	I0807 19:00:31.367441   61659 status.go:257] multinode-334028 status: &{Name:multinode-334028 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:00:31.367455   61659 status.go:255] checking status of multinode-334028-m02 ...
	I0807 19:00:31.367799   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.367832   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.383731   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42393
	I0807 19:00:31.384175   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.384660   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.384694   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.385044   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.385237   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetState
	I0807 19:00:31.386587   61659 status.go:330] multinode-334028-m02 host status = "Running" (err=<nil>)
	I0807 19:00:31.386600   61659 host.go:66] Checking if "multinode-334028-m02" exists ...
	I0807 19:00:31.386939   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.386978   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.401739   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37577
	I0807 19:00:31.402175   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.402601   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.402622   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.402946   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.403140   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetIP
	I0807 19:00:31.405907   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | domain multinode-334028-m02 has defined MAC address 52:54:00:ad:47:e9 in network mk-multinode-334028
	I0807 19:00:31.406306   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:47:e9", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:58:42 +0000 UTC Type:0 Mac:52:54:00:ad:47:e9 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-334028-m02 Clientid:01:52:54:00:ad:47:e9}
	I0807 19:00:31.406334   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | domain multinode-334028-m02 has defined IP address 192.168.39.119 and MAC address 52:54:00:ad:47:e9 in network mk-multinode-334028
	I0807 19:00:31.406442   61659 host.go:66] Checking if "multinode-334028-m02" exists ...
	I0807 19:00:31.406737   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.406770   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.421932   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0807 19:00:31.422318   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.422712   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.422746   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.423044   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.423258   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .DriverName
	I0807 19:00:31.423405   61659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0807 19:00:31.423428   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetSSHHostname
	I0807 19:00:31.426011   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | domain multinode-334028-m02 has defined MAC address 52:54:00:ad:47:e9 in network mk-multinode-334028
	I0807 19:00:31.426414   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:47:e9", ip: ""} in network mk-multinode-334028: {Iface:virbr1 ExpiryTime:2024-08-07 19:58:42 +0000 UTC Type:0 Mac:52:54:00:ad:47:e9 Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:multinode-334028-m02 Clientid:01:52:54:00:ad:47:e9}
	I0807 19:00:31.426447   61659 main.go:141] libmachine: (multinode-334028-m02) DBG | domain multinode-334028-m02 has defined IP address 192.168.39.119 and MAC address 52:54:00:ad:47:e9 in network mk-multinode-334028
	I0807 19:00:31.426548   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetSSHPort
	I0807 19:00:31.426702   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetSSHKeyPath
	I0807 19:00:31.426853   61659 main.go:141] libmachine: (multinode-334028-m02) Calling .GetSSHUsername
	I0807 19:00:31.427031   61659 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19389-20864/.minikube/machines/multinode-334028-m02/id_rsa Username:docker}
	I0807 19:00:31.507208   61659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0807 19:00:31.521880   61659 status.go:257] multinode-334028-m02 status: &{Name:multinode-334028-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0807 19:00:31.521925   61659 status.go:255] checking status of multinode-334028-m03 ...
	I0807 19:00:31.522331   61659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0807 19:00:31.522375   61659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0807 19:00:31.537507   61659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
	I0807 19:00:31.537897   61659 main.go:141] libmachine: () Calling .GetVersion
	I0807 19:00:31.538333   61659 main.go:141] libmachine: Using API Version  1
	I0807 19:00:31.538358   61659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0807 19:00:31.538725   61659 main.go:141] libmachine: () Calling .GetMachineName
	I0807 19:00:31.538939   61659 main.go:141] libmachine: (multinode-334028-m03) Calling .GetState
	I0807 19:00:31.540424   61659 status.go:330] multinode-334028-m03 host status = "Stopped" (err=<nil>)
	I0807 19:00:31.540440   61659 status.go:343] host is not running, skipping remaining checks
	I0807 19:00:31.540448   61659 status.go:257] multinode-334028-m03 status: &{Name:multinode-334028-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
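The stderr above shows how the status check decides the apiserver is healthy: it GETs /healthz on the control-plane endpoint and treats HTTP 200 with body "ok" as healthy. A minimal standalone probe along the same lines (an assumption-laden sketch, not minikube's status.go: it skips TLS verification because, unlike minikube, it does not load the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skip certificate verification instead of
			// loading the cluster's CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Endpoint taken from the log above (control-plane IP and port 8443).
	resp, err := client.Get("https://192.168.39.165:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}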

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-334028 node start m03 -v=7 --alsologtostderr: (38.976846467s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.61s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-334028 node delete m03: (1.831740754s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.35s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (181.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334028 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0807 19:11:31.076729   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334028 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.359301744s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-334028 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.89s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-334028
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334028-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-334028-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.418635ms)

                                                
                                                
-- stdout --
	* [multinode-334028-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-334028-m02' is duplicated with machine name 'multinode-334028-m02' in profile 'multinode-334028'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-334028-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-334028-m03 --driver=kvm2  --container-runtime=crio: (43.413806116s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-334028
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-334028: exit status 80 (210.030871ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-334028 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-334028-m03 already exists in multinode-334028-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-334028-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.49s)
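
Both non-zero exits above are the expected outcome: minikube rejects a profile name that collides with a machine name in an existing profile, and rejects adding a node whose generated name is already taken. A minimal sketch of avoiding the first collision (new-profile is a placeholder name, not taken from this run):

	# Inspect the names already in use before creating a new profile.
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 node list -p multinode-334028
	# Then start under a name that does not clash with any machine listed above.
	out/minikube-linux-amd64 start -p new-profile --driver=kvm2 --container-runtime=crio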

                                                
                                    
x
+
TestScheduledStopUnix (111.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-646830 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-646830 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.325466668s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646830 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-646830 -n scheduled-stop-646830
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646830 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646830 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646830 -n scheduled-stop-646830
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646830
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-646830 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-646830
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-646830: exit status 7 (64.502719ms)

                                                
                                                
-- stdout --
	scheduled-stop-646830
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646830 -n scheduled-stop-646830
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-646830 -n scheduled-stop-646830: exit status 7 (62.300479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-646830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-646830
--- PASS: TestScheduledStopUnix (111.92s)
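
The test above walks through the whole scheduled-stop workflow; a minimal sketch of the same sequence against a hypothetical profile named my-profile:

	# Schedule a stop five minutes out and inspect the pending timer.
	out/minikube-linux-amd64 stop -p my-profile --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p my-profile
	# Either cancel the pending stop, or schedule a shorter one and let it fire.
	out/minikube-linux-amd64 stop -p my-profile --cancel-scheduled
	out/minikube-linux-amd64 stop -p my-profile --schedule 15s
	# After the stop fires, status exits with code 7 and reports the host as Stopped.
	out/minikube-linux-amd64 status -p my-profile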

                                                
                                    
x
+
TestRunningBinaryUpgrade (220.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4231071871 start -p running-upgrade-252907 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0807 19:21:14.127933   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
E0807 19:21:31.076630   28052 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19389-20864/.minikube/profiles/functional-965692/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4231071871 start -p running-upgrade-252907 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.461130502s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-252907 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-252907 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.533923082s)
helpers_test.go:175: Cleaning up "running-upgrade-252907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-252907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-252907: (1.143023493s)
--- PASS: TestRunningBinaryUpgrade (220.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (74.298872ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-160192] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19389
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19389-20864/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19389-20864/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
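
The MK_USAGE exit is the behaviour the test asserts: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the two valid alternatives, reusing the profile name from this run:

	# Either clear any pinned version and start without Kubernetes...
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --driver=kvm2 --container-runtime=crio
	# ...or drop --no-kubernetes and pin a Kubernetes version instead.
	out/minikube-linux-amd64 start -p NoKubernetes-160192 --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio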

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (97.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-160192 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-160192 --driver=kvm2  --container-runtime=crio: (1m37.080133993s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-160192 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (27.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.445102522s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-160192 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-160192 status -o json: exit status 2 (240.83208ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-160192","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-160192
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-160192: (1.012886655s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (98.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4238847518 start -p stopped-upgrade-043880 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4238847518 start -p stopped-upgrade-043880 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (52.542260027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4238847518 -p stopped-upgrade-043880 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4238847518 -p stopped-upgrade-043880 stop: (2.144989286s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-043880 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-043880 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.890014261s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.58s)
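
The upgrade path exercised here is: provision with an older release binary, stop the cluster, then restart it with the binary under test. A minimal sketch of the same flow, where the /tmp path stands in for whichever old release the test downloaded:

	# Create and stop the cluster with the old binary, then upgrade it in place with the new one.
	/tmp/minikube-v1.26.0.4238847518 start -p stopped-upgrade-043880 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.4238847518 -p stopped-upgrade-043880 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-043880 --memory=2200 --driver=kvm2 --container-runtime=crio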

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-160192 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.2419194s)
--- PASS: TestNoKubernetes/serial/Start (30.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-160192 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-160192 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.202647ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
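
Exit status 1 is what the test wants here: with --no-kubernetes the kubelet unit should not be active, so the check fails inside the guest (the underlying ssh status 3 is the code systemctl typically returns for an inactive unit). A minimal sketch of running the same check by hand:

	# A non-zero exit here means the kubelet service is not running in the VM.
	out/minikube-linux-amd64 ssh -p NoKubernetes-160192 "sudo systemctl is-active --quiet service kubelet"
	echo $?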

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.028955373s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-160192
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-160192: (1.288273365s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (39.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-160192 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-160192 --driver=kvm2  --container-runtime=crio: (39.235011131s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-160192 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-160192 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.322119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-043880
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
x
+
TestPause/serial/Start (63.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-302295 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-302295 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.448769942s)
--- PASS: TestPause/serial/Start (63.45s)

                                                
                                    

Test skip (35/215)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    