Test Report: KVM_Linux_crio 19364

663d17776bbce0b1e831c154f8973876d77c5fd1:2024-08-04:35636

Test fail (11/215)

TestAddons/Setup (2400.05s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-474272 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-474272 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.952417532s)

-- stdout --
	* [addons-474272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-474272" primary control-plane node in "addons-474272" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	* Verifying ingress addon...
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-474272 service yakd-dashboard -n yakd-dashboard
	
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-474272 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
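The gcp-auth notes in the stdout above are actionable; a minimal sketch of both suggestions follows (the pod name "my-pod" is a placeholder, and the refresh invocation assumes the same addons-474272 profile):

# Illustrative only: opt a single pod out of credential mounting by giving it
# a label with the gcp-auth-skip-secret key, as the output above suggests.
kubectl label pod my-pod gcp-auth-skip-secret=true

# Mount credentials into pods that already existed by re-running the addon with --refresh.
minikube -p addons-474272 addons enable gcp-auth --refresh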
** stderr ** 
	I0804 00:43:07.344011   98453 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:43:07.344173   98453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:43:07.344188   98453 out.go:304] Setting ErrFile to fd 2...
	I0804 00:43:07.344192   98453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:43:07.344384   98453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 00:43:07.345055   98453 out.go:298] Setting JSON to false
	I0804 00:43:07.345988   98453 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8731,"bootTime":1722723456,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:43:07.346053   98453 start.go:139] virtualization: kvm guest
	I0804 00:43:07.348255   98453 out.go:177] * [addons-474272] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:43:07.349728   98453 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 00:43:07.349737   98453 notify.go:220] Checking for updates...
	I0804 00:43:07.352472   98453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:43:07.353990   98453 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 00:43:07.355642   98453 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:43:07.357011   98453 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 00:43:07.358377   98453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 00:43:07.360026   98453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:43:07.392646   98453 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 00:43:07.393896   98453 start.go:297] selected driver: kvm2
	I0804 00:43:07.393915   98453 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:43:07.393929   98453 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 00:43:07.394728   98453 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:43:07.394803   98453 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:43:07.410152   98453 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:43:07.410210   98453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:43:07.410474   98453 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 00:43:07.410539   98453 cni.go:84] Creating CNI manager for ""
	I0804 00:43:07.410553   98453 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:43:07.410564   98453 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:43:07.410620   98453 start.go:340] cluster config:
	{Name:addons-474272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-474272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:43:07.410727   98453 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:43:07.412450   98453 out.go:177] * Starting "addons-474272" primary control-plane node in "addons-474272" cluster
	I0804 00:43:07.413727   98453 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:43:07.413773   98453 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:43:07.413781   98453 cache.go:56] Caching tarball of preloaded images
	I0804 00:43:07.413886   98453 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 00:43:07.413907   98453 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 00:43:07.414198   98453 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/config.json ...
	I0804 00:43:07.414218   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/config.json: {Name:mkc68cbc9c1cc90b3fdcb201590f30431a368287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:07.414350   98453 start.go:360] acquireMachinesLock for addons-474272: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 00:43:07.414405   98453 start.go:364] duration metric: took 41.94µs to acquireMachinesLock for "addons-474272"
	I0804 00:43:07.414429   98453 start.go:93] Provisioning new machine with config: &{Name:addons-474272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-474272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:43:07.414484   98453 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 00:43:07.416034   98453 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0804 00:43:07.416174   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:43:07.416216   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:43:07.430472   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38585
	I0804 00:43:07.430954   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:43:07.431565   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:43:07.431583   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:43:07.432009   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:43:07.432221   98453 main.go:141] libmachine: (addons-474272) Calling .GetMachineName
	I0804 00:43:07.432453   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:07.432625   98453 start.go:159] libmachine.API.Create for "addons-474272" (driver="kvm2")
	I0804 00:43:07.432656   98453 client.go:168] LocalClient.Create starting
	I0804 00:43:07.432697   98453 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 00:43:07.549156   98453 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 00:43:07.950295   98453 main.go:141] libmachine: Running pre-create checks...
	I0804 00:43:07.950325   98453 main.go:141] libmachine: (addons-474272) Calling .PreCreateCheck
	I0804 00:43:07.950888   98453 main.go:141] libmachine: (addons-474272) Calling .GetConfigRaw
	I0804 00:43:07.951394   98453 main.go:141] libmachine: Creating machine...
	I0804 00:43:07.951410   98453 main.go:141] libmachine: (addons-474272) Calling .Create
	I0804 00:43:07.951574   98453 main.go:141] libmachine: (addons-474272) Creating KVM machine...
	I0804 00:43:07.952801   98453 main.go:141] libmachine: (addons-474272) DBG | found existing default KVM network
	I0804 00:43:07.953582   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:07.953401   98476 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0804 00:43:07.953608   98453 main.go:141] libmachine: (addons-474272) DBG | created network xml: 
	I0804 00:43:07.953626   98453 main.go:141] libmachine: (addons-474272) DBG | <network>
	I0804 00:43:07.953634   98453 main.go:141] libmachine: (addons-474272) DBG |   <name>mk-addons-474272</name>
	I0804 00:43:07.953643   98453 main.go:141] libmachine: (addons-474272) DBG |   <dns enable='no'/>
	I0804 00:43:07.953651   98453 main.go:141] libmachine: (addons-474272) DBG |   
	I0804 00:43:07.953660   98453 main.go:141] libmachine: (addons-474272) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0804 00:43:07.953689   98453 main.go:141] libmachine: (addons-474272) DBG |     <dhcp>
	I0804 00:43:07.953761   98453 main.go:141] libmachine: (addons-474272) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0804 00:43:07.953795   98453 main.go:141] libmachine: (addons-474272) DBG |     </dhcp>
	I0804 00:43:07.953811   98453 main.go:141] libmachine: (addons-474272) DBG |   </ip>
	I0804 00:43:07.953823   98453 main.go:141] libmachine: (addons-474272) DBG |   
	I0804 00:43:07.953833   98453 main.go:141] libmachine: (addons-474272) DBG | </network>
	I0804 00:43:07.953839   98453 main.go:141] libmachine: (addons-474272) DBG | 
	I0804 00:43:07.959235   98453 main.go:141] libmachine: (addons-474272) DBG | trying to create private KVM network mk-addons-474272 192.168.39.0/24...
	I0804 00:43:08.027398   98453 main.go:141] libmachine: (addons-474272) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272 ...
	I0804 00:43:08.027449   98453 main.go:141] libmachine: (addons-474272) DBG | private KVM network mk-addons-474272 192.168.39.0/24 created
	I0804 00:43:08.027470   98453 main.go:141] libmachine: (addons-474272) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:43:08.027496   98453 main.go:141] libmachine: (addons-474272) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 00:43:08.027566   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:08.027277   98476 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:43:08.301893   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:08.301751   98476 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa...
	I0804 00:43:08.342265   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:08.342136   98476 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/addons-474272.rawdisk...
	I0804 00:43:08.342295   98453 main.go:141] libmachine: (addons-474272) DBG | Writing magic tar header
	I0804 00:43:08.342307   98453 main.go:141] libmachine: (addons-474272) DBG | Writing SSH key tar header
	I0804 00:43:08.342333   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:08.342269   98476 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272 ...
	I0804 00:43:08.342364   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272
	I0804 00:43:08.342487   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 00:43:08.342529   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:43:08.342560   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272 (perms=drwx------)
	I0804 00:43:08.342573   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 00:43:08.342580   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 00:43:08.342588   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 00:43:08.342594   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 00:43:08.342602   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 00:43:08.342611   98453 main.go:141] libmachine: (addons-474272) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 00:43:08.342622   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 00:43:08.342638   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home/jenkins
	I0804 00:43:08.342648   98453 main.go:141] libmachine: (addons-474272) DBG | Checking permissions on dir: /home
	I0804 00:43:08.342665   98453 main.go:141] libmachine: (addons-474272) DBG | Skipping /home - not owner
	I0804 00:43:08.342675   98453 main.go:141] libmachine: (addons-474272) Creating domain...
	I0804 00:43:08.343786   98453 main.go:141] libmachine: (addons-474272) define libvirt domain using xml: 
	I0804 00:43:08.343831   98453 main.go:141] libmachine: (addons-474272) <domain type='kvm'>
	I0804 00:43:08.343843   98453 main.go:141] libmachine: (addons-474272)   <name>addons-474272</name>
	I0804 00:43:08.343851   98453 main.go:141] libmachine: (addons-474272)   <memory unit='MiB'>4000</memory>
	I0804 00:43:08.343859   98453 main.go:141] libmachine: (addons-474272)   <vcpu>2</vcpu>
	I0804 00:43:08.343865   98453 main.go:141] libmachine: (addons-474272)   <features>
	I0804 00:43:08.343876   98453 main.go:141] libmachine: (addons-474272)     <acpi/>
	I0804 00:43:08.343883   98453 main.go:141] libmachine: (addons-474272)     <apic/>
	I0804 00:43:08.343889   98453 main.go:141] libmachine: (addons-474272)     <pae/>
	I0804 00:43:08.343901   98453 main.go:141] libmachine: (addons-474272)     
	I0804 00:43:08.343906   98453 main.go:141] libmachine: (addons-474272)   </features>
	I0804 00:43:08.343913   98453 main.go:141] libmachine: (addons-474272)   <cpu mode='host-passthrough'>
	I0804 00:43:08.343917   98453 main.go:141] libmachine: (addons-474272)   
	I0804 00:43:08.343927   98453 main.go:141] libmachine: (addons-474272)   </cpu>
	I0804 00:43:08.343957   98453 main.go:141] libmachine: (addons-474272)   <os>
	I0804 00:43:08.343982   98453 main.go:141] libmachine: (addons-474272)     <type>hvm</type>
	I0804 00:43:08.343995   98453 main.go:141] libmachine: (addons-474272)     <boot dev='cdrom'/>
	I0804 00:43:08.344006   98453 main.go:141] libmachine: (addons-474272)     <boot dev='hd'/>
	I0804 00:43:08.344019   98453 main.go:141] libmachine: (addons-474272)     <bootmenu enable='no'/>
	I0804 00:43:08.344028   98453 main.go:141] libmachine: (addons-474272)   </os>
	I0804 00:43:08.344039   98453 main.go:141] libmachine: (addons-474272)   <devices>
	I0804 00:43:08.344051   98453 main.go:141] libmachine: (addons-474272)     <disk type='file' device='cdrom'>
	I0804 00:43:08.344068   98453 main.go:141] libmachine: (addons-474272)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/boot2docker.iso'/>
	I0804 00:43:08.344079   98453 main.go:141] libmachine: (addons-474272)       <target dev='hdc' bus='scsi'/>
	I0804 00:43:08.344089   98453 main.go:141] libmachine: (addons-474272)       <readonly/>
	I0804 00:43:08.344096   98453 main.go:141] libmachine: (addons-474272)     </disk>
	I0804 00:43:08.344118   98453 main.go:141] libmachine: (addons-474272)     <disk type='file' device='disk'>
	I0804 00:43:08.344135   98453 main.go:141] libmachine: (addons-474272)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 00:43:08.344149   98453 main.go:141] libmachine: (addons-474272)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/addons-474272.rawdisk'/>
	I0804 00:43:08.344160   98453 main.go:141] libmachine: (addons-474272)       <target dev='hda' bus='virtio'/>
	I0804 00:43:08.344171   98453 main.go:141] libmachine: (addons-474272)     </disk>
	I0804 00:43:08.344184   98453 main.go:141] libmachine: (addons-474272)     <interface type='network'>
	I0804 00:43:08.344197   98453 main.go:141] libmachine: (addons-474272)       <source network='mk-addons-474272'/>
	I0804 00:43:08.344212   98453 main.go:141] libmachine: (addons-474272)       <model type='virtio'/>
	I0804 00:43:08.344224   98453 main.go:141] libmachine: (addons-474272)     </interface>
	I0804 00:43:08.344235   98453 main.go:141] libmachine: (addons-474272)     <interface type='network'>
	I0804 00:43:08.344248   98453 main.go:141] libmachine: (addons-474272)       <source network='default'/>
	I0804 00:43:08.344259   98453 main.go:141] libmachine: (addons-474272)       <model type='virtio'/>
	I0804 00:43:08.344270   98453 main.go:141] libmachine: (addons-474272)     </interface>
	I0804 00:43:08.344284   98453 main.go:141] libmachine: (addons-474272)     <serial type='pty'>
	I0804 00:43:08.344295   98453 main.go:141] libmachine: (addons-474272)       <target port='0'/>
	I0804 00:43:08.344305   98453 main.go:141] libmachine: (addons-474272)     </serial>
	I0804 00:43:08.344320   98453 main.go:141] libmachine: (addons-474272)     <console type='pty'>
	I0804 00:43:08.344340   98453 main.go:141] libmachine: (addons-474272)       <target type='serial' port='0'/>
	I0804 00:43:08.344362   98453 main.go:141] libmachine: (addons-474272)     </console>
	I0804 00:43:08.344379   98453 main.go:141] libmachine: (addons-474272)     <rng model='virtio'>
	I0804 00:43:08.344388   98453 main.go:141] libmachine: (addons-474272)       <backend model='random'>/dev/random</backend>
	I0804 00:43:08.344394   98453 main.go:141] libmachine: (addons-474272)     </rng>
	I0804 00:43:08.344398   98453 main.go:141] libmachine: (addons-474272)     
	I0804 00:43:08.344404   98453 main.go:141] libmachine: (addons-474272)     
	I0804 00:43:08.344409   98453 main.go:141] libmachine: (addons-474272)   </devices>
	I0804 00:43:08.344416   98453 main.go:141] libmachine: (addons-474272) </domain>
	I0804 00:43:08.344423   98453 main.go:141] libmachine: (addons-474272) 
	I0804 00:43:08.349113   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:27:65:4b in network default
	I0804 00:43:08.349820   98453 main.go:141] libmachine: (addons-474272) Ensuring networks are active...
	I0804 00:43:08.349848   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:08.350548   98453 main.go:141] libmachine: (addons-474272) Ensuring network default is active
	I0804 00:43:08.350962   98453 main.go:141] libmachine: (addons-474272) Ensuring network mk-addons-474272 is active
	I0804 00:43:08.351529   98453 main.go:141] libmachine: (addons-474272) Getting domain xml...
	I0804 00:43:08.352229   98453 main.go:141] libmachine: (addons-474272) Creating domain...
	I0804 00:43:09.559335   98453 main.go:141] libmachine: (addons-474272) Waiting to get IP...
	I0804 00:43:09.560033   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:09.560417   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:09.560450   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:09.560385   98476 retry.go:31] will retry after 202.182315ms: waiting for machine to come up
	I0804 00:43:09.763832   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:09.764295   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:09.764321   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:09.764252   98476 retry.go:31] will retry after 261.12002ms: waiting for machine to come up
	I0804 00:43:10.026381   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:10.026765   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:10.026789   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:10.026718   98476 retry.go:31] will retry after 382.006873ms: waiting for machine to come up
	I0804 00:43:10.410281   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:10.410689   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:10.410721   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:10.410631   98476 retry.go:31] will retry after 582.056682ms: waiting for machine to come up
	I0804 00:43:10.994434   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:10.994810   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:10.994838   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:10.994757   98476 retry.go:31] will retry after 582.597507ms: waiting for machine to come up
	I0804 00:43:11.578570   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:11.579020   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:11.579049   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:11.578967   98476 retry.go:31] will retry after 818.057847ms: waiting for machine to come up
	I0804 00:43:12.398914   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:12.399370   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:12.399398   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:12.399310   98476 retry.go:31] will retry after 1.003135936s: waiting for machine to come up
	I0804 00:43:13.404100   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:13.404599   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:13.404627   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:13.404541   98476 retry.go:31] will retry after 1.140236669s: waiting for machine to come up
	I0804 00:43:14.546059   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:14.546438   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:14.546473   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:14.546388   98476 retry.go:31] will retry after 1.680504772s: waiting for machine to come up
	I0804 00:43:16.228158   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:16.228639   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:16.228666   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:16.228600   98476 retry.go:31] will retry after 2.28328045s: waiting for machine to come up
	I0804 00:43:18.513752   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:18.514261   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:18.514281   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:18.514217   98476 retry.go:31] will retry after 2.730862253s: waiting for machine to come up
	I0804 00:43:21.246426   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:21.246861   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:21.246886   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:21.246819   98476 retry.go:31] will retry after 2.366317204s: waiting for machine to come up
	I0804 00:43:23.614351   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:23.614799   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:23.614823   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:23.614744   98476 retry.go:31] will retry after 3.614618703s: waiting for machine to come up
	I0804 00:43:27.231288   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:27.231766   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find current IP address of domain addons-474272 in network mk-addons-474272
	I0804 00:43:27.231796   98453 main.go:141] libmachine: (addons-474272) DBG | I0804 00:43:27.231713   98476 retry.go:31] will retry after 4.857791377s: waiting for machine to come up
	I0804 00:43:32.094852   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.095192   98453 main.go:141] libmachine: (addons-474272) Found IP for machine: 192.168.39.127
	I0804 00:43:32.095226   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has current primary IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.095255   98453 main.go:141] libmachine: (addons-474272) Reserving static IP address...
	I0804 00:43:32.095538   98453 main.go:141] libmachine: (addons-474272) DBG | unable to find host DHCP lease matching {name: "addons-474272", mac: "52:54:00:a6:d9:6e", ip: "192.168.39.127"} in network mk-addons-474272
	I0804 00:43:32.167206   98453 main.go:141] libmachine: (addons-474272) DBG | Getting to WaitForSSH function...
	I0804 00:43:32.167236   98453 main.go:141] libmachine: (addons-474272) Reserved static IP address: 192.168.39.127
	I0804 00:43:32.167251   98453 main.go:141] libmachine: (addons-474272) Waiting for SSH to be available...
	I0804 00:43:32.170325   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.170839   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.170877   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.170993   98453 main.go:141] libmachine: (addons-474272) DBG | Using SSH client type: external
	I0804 00:43:32.171021   98453 main.go:141] libmachine: (addons-474272) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa (-rw-------)
	I0804 00:43:32.171066   98453 main.go:141] libmachine: (addons-474272) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 00:43:32.171089   98453 main.go:141] libmachine: (addons-474272) DBG | About to run SSH command:
	I0804 00:43:32.171102   98453 main.go:141] libmachine: (addons-474272) DBG | exit 0
	I0804 00:43:32.293802   98453 main.go:141] libmachine: (addons-474272) DBG | SSH cmd err, output: <nil>: 
	I0804 00:43:32.294167   98453 main.go:141] libmachine: (addons-474272) KVM machine creation complete!
	I0804 00:43:32.294504   98453 main.go:141] libmachine: (addons-474272) Calling .GetConfigRaw
	I0804 00:43:32.295095   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:32.295272   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:32.295418   98453 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 00:43:32.295430   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:43:32.296937   98453 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 00:43:32.296950   98453 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 00:43:32.296957   98453 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 00:43:32.296965   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.299228   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.299602   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.299627   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.299813   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:32.299979   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.300131   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.300254   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:32.300401   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:32.300608   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:32.300622   98453 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 00:43:32.396673   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:43:32.396699   98453 main.go:141] libmachine: Detecting the provisioner...
	I0804 00:43:32.396710   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.399644   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.400036   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.400067   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.400193   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:32.400409   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.400563   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.400709   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:32.400853   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:32.401010   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:32.401020   98453 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 00:43:32.498213   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 00:43:32.498293   98453 main.go:141] libmachine: found compatible host: buildroot
	I0804 00:43:32.498304   98453 main.go:141] libmachine: Provisioning with buildroot...
	I0804 00:43:32.498312   98453 main.go:141] libmachine: (addons-474272) Calling .GetMachineName
	I0804 00:43:32.498596   98453 buildroot.go:166] provisioning hostname "addons-474272"
	I0804 00:43:32.498627   98453 main.go:141] libmachine: (addons-474272) Calling .GetMachineName
	I0804 00:43:32.498836   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.501833   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.502154   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.502182   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.502300   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:32.502503   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.502665   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.502834   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:32.502983   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:32.503187   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:32.503209   98453 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-474272 && echo "addons-474272" | sudo tee /etc/hostname
	I0804 00:43:32.615664   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-474272
	
	I0804 00:43:32.615703   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.618485   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.618869   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.618897   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.619064   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:32.619271   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.619438   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.619580   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:32.619706   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:32.619882   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:32.619899   98453 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-474272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-474272/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-474272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 00:43:32.726977   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 00:43:32.727020   98453 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 00:43:32.727083   98453 buildroot.go:174] setting up certificates
	I0804 00:43:32.727101   98453 provision.go:84] configureAuth start
	I0804 00:43:32.727118   98453 main.go:141] libmachine: (addons-474272) Calling .GetMachineName
	I0804 00:43:32.727397   98453 main.go:141] libmachine: (addons-474272) Calling .GetIP
	I0804 00:43:32.729892   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.730266   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.730297   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.730400   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.732992   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.733329   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.733376   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.733544   98453 provision.go:143] copyHostCerts
	I0804 00:43:32.733638   98453 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 00:43:32.733801   98453 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 00:43:32.733903   98453 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 00:43:32.733988   98453 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.addons-474272 san=[127.0.0.1 192.168.39.127 addons-474272 localhost minikube]
	I0804 00:43:32.898552   98453 provision.go:177] copyRemoteCerts
	I0804 00:43:32.898698   98453 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 00:43:32.898732   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:32.901428   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.901861   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:32.901885   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:32.902168   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:32.902357   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:32.902494   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:32.902600   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:43:32.979902   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 00:43:33.004270   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 00:43:33.027228   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 00:43:33.050736   98453 provision.go:87] duration metric: took 323.61749ms to configureAuth
	I0804 00:43:33.050763   98453 buildroot.go:189] setting minikube options for container-runtime
	I0804 00:43:33.050980   98453 config.go:182] Loaded profile config "addons-474272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:43:33.051070   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:33.053791   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.054138   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.054160   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.054318   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:33.054503   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.054664   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.054804   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:33.055117   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:33.055357   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:33.055379   98453 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 00:43:33.311833   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 00:43:33.311871   98453 main.go:141] libmachine: Checking connection to Docker...
	I0804 00:43:33.311880   98453 main.go:141] libmachine: (addons-474272) Calling .GetURL
	I0804 00:43:33.313222   98453 main.go:141] libmachine: (addons-474272) DBG | Using libvirt version 6000000
	I0804 00:43:33.315405   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.315712   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.315746   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.315881   98453 main.go:141] libmachine: Docker is up and running!
	I0804 00:43:33.315899   98453 main.go:141] libmachine: Reticulating splines...
	I0804 00:43:33.315908   98453 client.go:171] duration metric: took 25.883241969s to LocalClient.Create
	I0804 00:43:33.315935   98453 start.go:167] duration metric: took 25.883310751s to libmachine.API.Create "addons-474272"
	I0804 00:43:33.315947   98453 start.go:293] postStartSetup for "addons-474272" (driver="kvm2")
	I0804 00:43:33.315963   98453 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 00:43:33.316012   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:33.316277   98453 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 00:43:33.316300   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:33.318461   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.318726   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.318865   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.318910   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:33.319090   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.319232   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:33.319380   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:43:33.395545   98453 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 00:43:33.399902   98453 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 00:43:33.399929   98453 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 00:43:33.399998   98453 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 00:43:33.400021   98453 start.go:296] duration metric: took 84.065042ms for postStartSetup
	I0804 00:43:33.400068   98453 main.go:141] libmachine: (addons-474272) Calling .GetConfigRaw
	I0804 00:43:33.400621   98453 main.go:141] libmachine: (addons-474272) Calling .GetIP
	I0804 00:43:33.403119   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.403499   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.403522   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.403726   98453 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/config.json ...
	I0804 00:43:33.403966   98453 start.go:128] duration metric: took 25.989469622s to createHost
	I0804 00:43:33.403994   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:33.405974   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.406237   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.406267   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.406364   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:33.406561   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.406716   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.406835   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:33.406995   98453 main.go:141] libmachine: Using SSH client type: native
	I0804 00:43:33.407221   98453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0804 00:43:33.407236   98453 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 00:43:33.502188   98453 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722732213.479319477
	
	I0804 00:43:33.502214   98453 fix.go:216] guest clock: 1722732213.479319477
	I0804 00:43:33.502224   98453 fix.go:229] Guest: 2024-08-04 00:43:33.479319477 +0000 UTC Remote: 2024-08-04 00:43:33.403981675 +0000 UTC m=+26.096132186 (delta=75.337802ms)
	I0804 00:43:33.502251   98453 fix.go:200] guest clock delta is within tolerance: 75.337802ms
	I0804 00:43:33.502258   98453 start.go:83] releasing machines lock for "addons-474272", held for 26.087842676s
	I0804 00:43:33.502286   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:33.502596   98453 main.go:141] libmachine: (addons-474272) Calling .GetIP
	I0804 00:43:33.506359   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.506687   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.506709   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.506863   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:33.507366   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:33.507558   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:43:33.507669   98453 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 00:43:33.507715   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:33.507819   98453 ssh_runner.go:195] Run: cat /version.json
	I0804 00:43:33.507846   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:43:33.510442   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.510580   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.510763   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.510786   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.510928   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:33.511063   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:33.511096   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:33.511117   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.511268   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:43:33.511289   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:33.511393   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:43:33.511463   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:43:33.511515   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:43:33.511666   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:43:33.586524   98453 ssh_runner.go:195] Run: systemctl --version
	I0804 00:43:33.606472   98453 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 00:43:33.769214   98453 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 00:43:33.775619   98453 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 00:43:33.775680   98453 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 00:43:33.795098   98453 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 00:43:33.795125   98453 start.go:495] detecting cgroup driver to use...
	I0804 00:43:33.795189   98453 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 00:43:33.816323   98453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 00:43:33.832204   98453 docker.go:217] disabling cri-docker service (if available) ...
	I0804 00:43:33.832261   98453 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 00:43:33.847150   98453 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 00:43:33.861722   98453 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 00:43:33.980637   98453 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 00:43:34.121124   98453 docker.go:233] disabling docker service ...
	I0804 00:43:34.121207   98453 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 00:43:34.136683   98453 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 00:43:34.150003   98453 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 00:43:34.298561   98453 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 00:43:34.427838   98453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 00:43:34.441854   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 00:43:34.459948   98453 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 00:43:34.460039   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.470048   98453 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 00:43:34.470133   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.479886   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.489545   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.499753   98453 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 00:43:34.511327   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.521464   98453 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.538539   98453 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 00:43:34.548471   98453 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 00:43:34.558258   98453 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 00:43:34.558308   98453 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 00:43:34.570962   98453 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 00:43:34.582029   98453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:43:34.706980   98453 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 00:43:34.847436   98453 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 00:43:34.847525   98453 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 00:43:34.852228   98453 start.go:563] Will wait 60s for crictl version
	I0804 00:43:34.852291   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:43:34.856087   98453 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 00:43:34.896155   98453 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 00:43:34.896248   98453 ssh_runner.go:195] Run: crio --version
	I0804 00:43:34.928676   98453 ssh_runner.go:195] Run: crio --version
	I0804 00:43:34.960460   98453 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 00:43:34.961975   98453 main.go:141] libmachine: (addons-474272) Calling .GetIP
	I0804 00:43:34.964848   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:34.965341   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:43:34.965381   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:43:34.965590   98453 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 00:43:34.969950   98453 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:43:34.982787   98453 kubeadm.go:883] updating cluster {Name:addons-474272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-474272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 00:43:34.982910   98453 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:43:34.982953   98453 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:43:35.015373   98453 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 00:43:35.015439   98453 ssh_runner.go:195] Run: which lz4
	I0804 00:43:35.019553   98453 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 00:43:35.023879   98453 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 00:43:35.023909   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 00:43:36.364821   98453 crio.go:462] duration metric: took 1.345333638s to copy over tarball
	I0804 00:43:36.364897   98453 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 00:43:38.581897   98453 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216971907s)
	I0804 00:43:38.581928   98453 crio.go:469] duration metric: took 2.217074253s to extract the tarball
	I0804 00:43:38.581943   98453 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 00:43:38.620497   98453 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 00:43:38.661583   98453 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 00:43:38.661611   98453 cache_images.go:84] Images are preloaded, skipping loading
	I0804 00:43:38.661621   98453 kubeadm.go:934] updating node { 192.168.39.127 8443 v1.30.3 crio true true} ...
	I0804 00:43:38.661755   98453 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-474272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-474272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 00:43:38.661850   98453 ssh_runner.go:195] Run: crio config
	I0804 00:43:38.705037   98453 cni.go:84] Creating CNI manager for ""
	I0804 00:43:38.705059   98453 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:43:38.705070   98453 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 00:43:38.705100   98453 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-474272 NodeName:addons-474272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 00:43:38.705274   98453 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-474272"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 00:43:38.705366   98453 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 00:43:38.714816   98453 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 00:43:38.714890   98453 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 00:43:38.723650   98453 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 00:43:38.740712   98453 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 00:43:38.757339   98453 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0804 00:43:38.774767   98453 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0804 00:43:38.778674   98453 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 00:43:38.790840   98453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:43:38.916083   98453 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:43:38.933163   98453 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272 for IP: 192.168.39.127
	I0804 00:43:38.933193   98453 certs.go:194] generating shared ca certs ...
	I0804 00:43:38.933215   98453 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:38.933409   98453 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 00:43:39.233616   98453 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt ...
	I0804 00:43:39.233648   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt: {Name:mke56838975bcbc355f1ff3c603b326847ed6da8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.233816   98453 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key ...
	I0804 00:43:39.233827   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key: {Name:mkd02158173bf698daf222246e028dbe3b23c584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.233911   98453 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 00:43:39.316755   98453 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt ...
	I0804 00:43:39.316783   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt: {Name:mk7bc95e9394a322359ee859a90230124a999e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.316938   98453 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key ...
	I0804 00:43:39.316948   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key: {Name:mk6922381818063063f3c4b73404c9e527266f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.317017   98453 certs.go:256] generating profile certs ...
	I0804 00:43:39.317076   98453 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.key
	I0804 00:43:39.317092   98453 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.crt with IP's: []
	I0804 00:43:39.464080   98453 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.crt ...
	I0804 00:43:39.464111   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.crt: {Name:mkd7efb9e65dc85a3e5a40e2e495b2fd6a9ddf53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.464275   98453 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.key ...
	I0804 00:43:39.464285   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/client.key: {Name:mk33082ce782e31893bbd05269f94838048816a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.464357   98453 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key.14b6834a
	I0804 00:43:39.464377   98453 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt.14b6834a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.127]
	I0804 00:43:39.556492   98453 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt.14b6834a ...
	I0804 00:43:39.556521   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt.14b6834a: {Name:mk792300dbe1b3e5d9c1b9eb29c8cc9c2eff0a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.556679   98453 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key.14b6834a ...
	I0804 00:43:39.556694   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key.14b6834a: {Name:mk4ac133b56bf56ff47cfa62017b8ebe8ee12c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.556762   98453 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt.14b6834a -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt
	I0804 00:43:39.556833   98453 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key.14b6834a -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key
	I0804 00:43:39.556877   98453 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.key
	I0804 00:43:39.556900   98453 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.crt with IP's: []
	I0804 00:43:39.605461   98453 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.crt ...
	I0804 00:43:39.605490   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.crt: {Name:mk0bd832d59fffc29fce6b5632285dc2e55cbebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.605647   98453 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.key ...
	I0804 00:43:39.605657   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.key: {Name:mk228c57fdbc8e5968bb084b1c1d16622ae8ef35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:43:39.605816   98453 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 00:43:39.605855   98453 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 00:43:39.605886   98453 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 00:43:39.605912   98453 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 00:43:39.606488   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 00:43:39.633382   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 00:43:39.655368   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 00:43:39.677247   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 00:43:39.700402   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 00:43:39.724552   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 00:43:39.748677   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 00:43:39.773844   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/addons-474272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 00:43:39.798820   98453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 00:43:39.823217   98453 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 00:43:39.840969   98453 ssh_runner.go:195] Run: openssl version
	I0804 00:43:39.846801   98453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 00:43:39.858044   98453 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:39.862549   98453 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:39.862608   98453 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 00:43:39.868663   98453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 00:43:39.879655   98453 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 00:43:39.883753   98453 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 00:43:39.883814   98453 kubeadm.go:392] StartCluster: {Name:addons-474272 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-474272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:43:39.883903   98453 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 00:43:39.883943   98453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 00:43:39.918795   98453 cri.go:89] found id: ""
	I0804 00:43:39.918886   98453 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 00:43:39.929866   98453 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 00:43:39.940067   98453 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 00:43:39.950190   98453 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 00:43:39.950214   98453 kubeadm.go:157] found existing configuration files:
	
	I0804 00:43:39.950271   98453 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 00:43:39.959906   98453 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 00:43:39.959980   98453 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 00:43:39.969995   98453 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 00:43:39.979459   98453 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 00:43:39.979534   98453 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 00:43:39.989160   98453 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 00:43:40.000202   98453 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 00:43:40.000266   98453 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 00:43:40.011625   98453 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 00:43:40.022458   98453 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 00:43:40.022526   98453 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 00:43:40.032815   98453 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 00:43:40.226556   98453 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 00:43:50.917537   98453 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 00:43:50.917644   98453 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 00:43:50.917774   98453 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 00:43:50.917897   98453 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 00:43:50.918026   98453 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 00:43:50.918131   98453 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 00:43:50.919707   98453 out.go:204]   - Generating certificates and keys ...
	I0804 00:43:50.919789   98453 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 00:43:50.919850   98453 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 00:43:50.919928   98453 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 00:43:50.920005   98453 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 00:43:50.920073   98453 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 00:43:50.920138   98453 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 00:43:50.920210   98453 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 00:43:50.920393   98453 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-474272 localhost] and IPs [192.168.39.127 127.0.0.1 ::1]
	I0804 00:43:50.920474   98453 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 00:43:50.920591   98453 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-474272 localhost] and IPs [192.168.39.127 127.0.0.1 ::1]
	I0804 00:43:50.920646   98453 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 00:43:50.920729   98453 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 00:43:50.920793   98453 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 00:43:50.920879   98453 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 00:43:50.920940   98453 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 00:43:50.921015   98453 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 00:43:50.921081   98453 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 00:43:50.921137   98453 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 00:43:50.921216   98453 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 00:43:50.921334   98453 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 00:43:50.921430   98453 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 00:43:50.922840   98453 out.go:204]   - Booting up control plane ...
	I0804 00:43:50.922922   98453 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 00:43:50.922989   98453 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 00:43:50.923044   98453 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 00:43:50.923139   98453 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 00:43:50.923209   98453 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 00:43:50.923242   98453 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 00:43:50.923377   98453 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 00:43:50.923440   98453 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 00:43:50.923488   98453 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.225459ms
	I0804 00:43:50.923554   98453 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 00:43:50.923601   98453 kubeadm.go:310] [api-check] The API server is healthy after 5.502131347s
	I0804 00:43:50.923732   98453 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 00:43:50.923841   98453 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 00:43:50.923900   98453 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 00:43:50.924074   98453 kubeadm.go:310] [mark-control-plane] Marking the node addons-474272 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 00:43:50.924153   98453 kubeadm.go:310] [bootstrap-token] Using token: irz5y1.qxs1g2876cz6h2mv
	I0804 00:43:50.925595   98453 out.go:204]   - Configuring RBAC rules ...
	I0804 00:43:50.925723   98453 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 00:43:50.925838   98453 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 00:43:50.926029   98453 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 00:43:50.926175   98453 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 00:43:50.926279   98453 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 00:43:50.926358   98453 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 00:43:50.926458   98453 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 00:43:50.926496   98453 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 00:43:50.926541   98453 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 00:43:50.926547   98453 kubeadm.go:310] 
	I0804 00:43:50.926600   98453 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 00:43:50.926617   98453 kubeadm.go:310] 
	I0804 00:43:50.926725   98453 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 00:43:50.926737   98453 kubeadm.go:310] 
	I0804 00:43:50.926778   98453 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 00:43:50.926855   98453 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 00:43:50.926919   98453 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 00:43:50.926929   98453 kubeadm.go:310] 
	I0804 00:43:50.926996   98453 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 00:43:50.927005   98453 kubeadm.go:310] 
	I0804 00:43:50.927085   98453 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 00:43:50.927094   98453 kubeadm.go:310] 
	I0804 00:43:50.927169   98453 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 00:43:50.927266   98453 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 00:43:50.927362   98453 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 00:43:50.927372   98453 kubeadm.go:310] 
	I0804 00:43:50.927487   98453 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 00:43:50.927588   98453 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 00:43:50.927597   98453 kubeadm.go:310] 
	I0804 00:43:50.927704   98453 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token irz5y1.qxs1g2876cz6h2mv \
	I0804 00:43:50.927847   98453 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e \
	I0804 00:43:50.927877   98453 kubeadm.go:310] 	--control-plane 
	I0804 00:43:50.927883   98453 kubeadm.go:310] 
	I0804 00:43:50.927991   98453 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 00:43:50.928000   98453 kubeadm.go:310] 
	I0804 00:43:50.928110   98453 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token irz5y1.qxs1g2876cz6h2mv \
	I0804 00:43:50.928261   98453 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e 
	I0804 00:43:50.928276   98453 cni.go:84] Creating CNI manager for ""
	I0804 00:43:50.928287   98453 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:43:50.929889   98453 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 00:43:50.931248   98453 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 00:43:50.943438   98453 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 00:43:50.964781   98453 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 00:43:50.964885   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:50.964879   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-474272 minikube.k8s.io/updated_at=2024_08_04T00_43_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=addons-474272 minikube.k8s.io/primary=true
	I0804 00:43:50.990659   98453 ops.go:34] apiserver oom_adj: -16
	I0804 00:43:51.110170   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:51.611044   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:52.111011   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:52.610910   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:53.110485   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:53.611138   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:54.110784   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:54.611052   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:55.110652   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:55.610286   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:56.110551   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:56.610349   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:57.110810   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:57.610375   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:58.110656   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:58.611035   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:59.110374   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:43:59.610635   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:00.111265   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:00.611261   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:01.110538   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:01.611173   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:02.110193   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:02.610620   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:03.111261   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:03.611088   98453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 00:44:03.711593   98453 kubeadm.go:1113] duration metric: took 12.746800341s to wait for elevateKubeSystemPrivileges
	I0804 00:44:03.711644   98453 kubeadm.go:394] duration metric: took 23.827836698s to StartCluster
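The repeated "kubectl get sa default" invocations above (one roughly every 500ms) are minikube waiting for kubeadm to create the "default" service account before it finishes elevating kube-system privileges, which is what the 12.7s elevateKubeSystemPrivileges metric measures. The following is a minimal, illustrative Go sketch of that polling pattern, reusing the kubectl binary and kubeconfig paths shown in the log; the two-minute timeout is an arbitrary assumption for the sketch and this is not minikube's actual implementation:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not from the log
		for time.Now().Before(deadline) {
			// Same command the log shows minikube running over SSH about every 500ms.
			if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
				log.Println("default service account exists; kube-system privileges can be elevated")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("timed out waiting for the default service account")
	}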
	I0804 00:44:03.711667   98453 settings.go:142] acquiring lock: {Name:mkf532aceb8d8524495256eb01b2b67c117281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:44:03.711813   98453 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 00:44:03.712323   98453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/kubeconfig: {Name:mk9db0d5521301bbe44f571d0153ba4b675d0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:44:03.712538   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 00:44:03.712556   98453 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 00:44:03.712621   98453 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0804 00:44:03.712722   98453 addons.go:69] Setting yakd=true in profile "addons-474272"
	I0804 00:44:03.712763   98453 addons.go:234] Setting addon yakd=true in "addons-474272"
	I0804 00:44:03.712770   98453 addons.go:69] Setting registry=true in profile "addons-474272"
	I0804 00:44:03.712779   98453 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-474272"
	I0804 00:44:03.712797   98453 addons.go:234] Setting addon registry=true in "addons-474272"
	I0804 00:44:03.712799   98453 config.go:182] Loaded profile config "addons-474272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:44:03.712751   98453 addons.go:69] Setting ingress-dns=true in profile "addons-474272"
	I0804 00:44:03.712825   98453 addons.go:234] Setting addon ingress-dns=true in "addons-474272"
	I0804 00:44:03.712835   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712841   98453 addons.go:69] Setting helm-tiller=true in profile "addons-474272"
	I0804 00:44:03.712826   98453 addons.go:69] Setting gcp-auth=true in profile "addons-474272"
	I0804 00:44:03.712854   98453 addons.go:69] Setting inspektor-gadget=true in profile "addons-474272"
	I0804 00:44:03.712860   98453 addons.go:69] Setting volcano=true in profile "addons-474272"
	I0804 00:44:03.712866   98453 addons.go:234] Setting addon helm-tiller=true in "addons-474272"
	I0804 00:44:03.712859   98453 addons.go:69] Setting storage-provisioner=true in profile "addons-474272"
	I0804 00:44:03.712869   98453 addons.go:69] Setting volumesnapshots=true in profile "addons-474272"
	I0804 00:44:03.712877   98453 addons.go:234] Setting addon volcano=true in "addons-474272"
	I0804 00:44:03.712881   98453 addons.go:69] Setting default-storageclass=true in profile "addons-474272"
	I0804 00:44:03.712880   98453 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-474272"
	I0804 00:44:03.712891   98453 addons.go:234] Setting addon storage-provisioner=true in "addons-474272"
	I0804 00:44:03.712898   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712901   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712905   98453 addons.go:234] Setting addon volumesnapshots=true in "addons-474272"
	I0804 00:44:03.712908   98453 addons.go:69] Setting ingress=true in profile "addons-474272"
	I0804 00:44:03.712906   98453 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-474272"
	I0804 00:44:03.712919   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712924   98453 addons.go:234] Setting addon ingress=true in "addons-474272"
	I0804 00:44:03.712943   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712950   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712902   98453 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-474272"
	I0804 00:44:03.713003   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712774   98453 addons.go:69] Setting cloud-spanner=true in profile "addons-474272"
	I0804 00:44:03.713144   98453 addons.go:234] Setting addon cloud-spanner=true in "addons-474272"
	I0804 00:44:03.713202   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712805   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.713392   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713400   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713418   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713428   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713441   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.713444   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.712824   98453 addons.go:69] Setting metrics-server=true in profile "addons-474272"
	I0804 00:44:03.713466   98453 addons.go:234] Setting addon metrics-server=true in "addons-474272"
	I0804 00:44:03.713487   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.713486   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713443   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.713522   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.712854   98453 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-474272"
	I0804 00:44:03.713555   98453 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-474272"
	I0804 00:44:03.713676   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713700   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713744   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.713813   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713846   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.713872   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713894   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.712836   98453 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-474272"
	I0804 00:44:03.713679   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.713709   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.713962   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.712855   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.712876   98453 mustload.go:65] Loading cluster: addons-474272
	I0804 00:44:03.714263   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.714307   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.714344   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.714311   98453 config.go:182] Loaded profile config "addons-474272": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 00:44:03.714380   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.714428   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.714683   98453 out.go:177] * Verifying Kubernetes components...
	I0804 00:44:03.712875   98453 addons.go:234] Setting addon inspektor-gadget=true in "addons-474272"
	I0804 00:44:03.713720   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.715290   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.715308   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.716203   98453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 00:44:03.716296   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.747380   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0804 00:44:03.747480   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I0804 00:44:03.747727   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0804 00:44:03.747969   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.748097   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.748416   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.748596   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.748611   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.748925   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.749124   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.749142   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.749176   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0804 00:44:03.749285   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.749304   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.749635   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.749681   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.749764   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.749798   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.749947   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.749974   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.750167   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.750204   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.750419   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.750457   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.750491   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.750943   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.750960   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.751039   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.751070   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.751351   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.751889   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.751928   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.752178   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0804 00:44:03.752878   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.752917   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.753113   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45107
	I0804 00:44:03.753483   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.753744   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.753942   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.753955   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.754288   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.754813   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.754854   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.762152   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.762181   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.762274   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0804 00:44:03.762405   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
	I0804 00:44:03.762478   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0804 00:44:03.762537   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0804 00:44:03.762589   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I0804 00:44:03.763508   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.763516   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.763730   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.764060   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.764086   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.764526   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.764567   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.764612   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.764686   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.765309   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.765331   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.765481   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.765500   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.765543   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.765628   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.765650   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.765846   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.766008   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.766086   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.766574   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.766611   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.766954   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.766968   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.768851   98453 addons.go:234] Setting addon default-storageclass=true in "addons-474272"
	I0804 00:44:03.768900   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.769267   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.769286   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.769447   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.770034   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.770070   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.770403   98453 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-474272"
	I0804 00:44:03.770439   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.770820   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.770867   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.771106   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.771664   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.771699   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.771732   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.773632   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:03.774012   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.774052   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.791357   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0804 00:44:03.791990   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.792668   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.792690   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.793105   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.793779   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.793823   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.794033   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0804 00:44:03.794617   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.795365   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.795389   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.795968   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.796032   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0804 00:44:03.796509   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.797022   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.797039   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.797450   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.798178   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.798219   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.798663   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.799516   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0804 00:44:03.801542   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.802103   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.802130   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.802143   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.802447   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:03.802463   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:03.802522   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.803121   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.803143   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:03.803170   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:03.803169   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.803244   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:03.803253   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:03.803260   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:03.803633   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:03.803643   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:03.803653   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	W0804 00:44:03.803747   98453 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0804 00:44:03.808466   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36405
	I0804 00:44:03.808531   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39219
	I0804 00:44:03.808906   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.809115   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.809726   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.809751   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.810206   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.810805   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.810872   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.811599   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0804 00:44:03.812108   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.812564   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.812579   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.812971   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.813153   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.813272   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.813290   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.813851   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.814387   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.814425   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.815481   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0804 00:44:03.816068   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.816692   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.816714   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.817193   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.817814   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.817863   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.829114   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36067
	I0804 00:44:03.829312   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I0804 00:44:03.829941   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.830695   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.830715   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.830776   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0804 00:44:03.831323   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.831542   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.832662   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I0804 00:44:03.833012   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.833527   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.833553   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.833880   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.834063   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.834145   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.835504   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.835783   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.836060   98453 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 00:44:03.836075   98453 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 00:44:03.836094   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.836123   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0804 00:44:03.836502   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.836520   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.836535   98453 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0804 00:44:03.837130   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I0804 00:44:03.837766   98453 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0804 00:44:03.837788   98453 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0804 00:44:03.837804   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.837806   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.837868   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34925
	I0804 00:44:03.837945   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.838251   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.838749   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.838771   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.838845   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.839203   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.839382   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.839422   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0804 00:44:03.840038   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.840041   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.840095   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.840612   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.840638   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.841001   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.841096   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.841252   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.841813   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.842166   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:03.842214   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:03.842298   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.842313   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.842373   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0804 00:44:03.842933   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.842995   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.843011   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.843043   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.843128   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.843188   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.843237   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.843395   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.843480   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.843494   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.843529   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.843624   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.843634   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.843632   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
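Each "new ssh client" line above records the connection details (IP 192.168.39.127, port 22, the per-machine id_rsa key, user "docker") that minikube uses to reach the VM before copying the addon manifests listed in the surrounding lines. Below is a rough, illustrative sketch of building such a client with golang.org/x/crypto/ssh, assuming key-based auth and skipping host-key verification because the target is a throwaway local test VM; the helper name is made up for the example and this is not minikube's sshutil code:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// newSSHClient is an assumed helper name; it mirrors the fields logged above.
	func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Host-key checking is skipped only because this is a local test VM.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
	}

	func main() {
		client, err := newSSHClient("192.168.39.127", 22,
			"/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa", "docker")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		log.Println("connected")
	}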
	I0804 00:44:03.843922   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.843953   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.843995   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0804 00:44:03.844193   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.844377   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.845483   98453 out.go:177]   - Using image docker.io/registry:2.8.3
	I0804 00:44:03.845601   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.845641   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.845706   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0804 00:44:03.845722   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.845749   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.845954   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.845976   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.846238   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.846369   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.846691   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.847202   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35493
	I0804 00:44:03.847352   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.847408   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.847726   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39945
	I0804 00:44:03.847759   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.848336   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.848354   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.848337   98453 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0804 00:44:03.848429   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.848567   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.848850   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.848868   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.848608   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.849137   98453 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0804 00:44:03.849202   98453 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0804 00:44:03.849429   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.849687   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.849451   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.849719   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.849745   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.849664   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.849919   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.850067   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.850081   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.850082   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0804 00:44:03.850194   98453 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0804 00:44:03.850556   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0804 00:44:03.850574   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.850215   98453 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0804 00:44:03.850505   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.851135   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.851137   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.851677   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.851693   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0804 00:44:03.851717   98453 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0804 00:44:03.851729   98453 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0804 00:44:03.851736   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.851738   98453 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0804 00:44:03.851752   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.851955   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0804 00:44:03.852145   98453 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0804 00:44:03.852163   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0804 00:44:03.852184   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.852390   98453 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0804 00:44:03.852433   98453 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0804 00:44:03.852445   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.852848   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.853546   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.854222   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.854246   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.854846   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.854983   98453 out.go:177]   - Using image docker.io/busybox:stable
	I0804 00:44:03.855085   98453 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0804 00:44:03.855090   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.855276   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.855407   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.855442   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.855561   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.855791   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.856067   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.856553   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.856617   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.856824   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.857054   98453 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0804 00:44:03.857494   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0804 00:44:03.857516   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.857162   98453 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0804 00:44:03.857575   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0804 00:44:03.857589   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.857901   98453 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0804 00:44:03.858169   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.858239   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.858947   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.858980   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.859001   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.859211   98453 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0804 00:44:03.859441   98453 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0804 00:44:03.859686   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0804 00:44:03.859703   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.859479   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.860148   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.860333   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.860350   98453 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 00:44:03.860685   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.860922   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0804 00:44:03.860945   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.861232   98453 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0804 00:44:03.861249   98453 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0804 00:44:03.861266   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.862265   98453 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0804 00:44:03.862380   98453 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:44:03.862396   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 00:44:03.862416   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.863262   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0804 00:44:03.863300   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.863412   98453 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0804 00:44:03.863424   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0804 00:44:03.863439   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.864070   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.864482   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.864894   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.864915   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.865097   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.865160   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.865232   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.865292   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.865434   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0804 00:44:03.865551   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.865551   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.865587   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.865603   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.865630   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.865728   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.865794   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.865842   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.865877   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.866721   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.866721   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.866735   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.866751   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.866930   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.867160   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.867219   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.867381   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.867403   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.867446   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.867608   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.867748   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.867850   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.867855   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0804 00:44:03.868009   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.868241   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.868265   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.868425   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.868685   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.868720   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.868730   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.868818   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.868737   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.868843   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.869257   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.869346   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.869467   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.869537   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.869599   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.869705   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.869732   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.869826   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.869912   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.870172   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.870196   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.870336   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0804 00:44:03.870341   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.870577   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.870672   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.870757   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.872414   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0804 00:44:03.872566   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0804 00:44:03.872757   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:03.873283   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:03.873300   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:03.873680   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:03.873964   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:03.874917   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0804 00:44:03.875503   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:03.877039   98453 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0804 00:44:03.877064   98453 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0804 00:44:03.879184   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0804 00:44:03.879205   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0804 00:44:03.879223   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.879268   98453 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0804 00:44:03.879288   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0804 00:44:03.879305   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:03.882465   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.882937   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.882952   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.883176   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.883367   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.883532   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.883680   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:03.884283   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.884623   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:03.884646   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:03.884878   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:03.885089   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:03.885266   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:03.885477   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	W0804 00:44:03.902428   98453 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40206->192.168.39.127:22: read: connection reset by peer
	I0804 00:44:03.902465   98453 retry.go:31] will retry after 293.235972ms: ssh: handshake failed: read tcp 192.168.39.1:40206->192.168.39.127:22: read: connection reset by peer
	W0804 00:44:03.902548   98453 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40208->192.168.39.127:22: read: connection reset by peer
	I0804 00:44:03.902560   98453 retry.go:31] will retry after 371.51808ms: ssh: handshake failed: read tcp 192.168.39.1:40208->192.168.39.127:22: read: connection reset by peer
	I0804 00:44:04.171689   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0804 00:44:04.239777   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0804 00:44:04.241427   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 00:44:04.333637   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 00:44:04.353432   98453 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0804 00:44:04.353463   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0804 00:44:04.419069   98453 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0804 00:44:04.419096   98453 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0804 00:44:04.421239   98453 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0804 00:44:04.421260   98453 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0804 00:44:04.430302   98453 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0804 00:44:04.430321   98453 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0804 00:44:04.433308   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0804 00:44:04.497265   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0804 00:44:04.501647   98453 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0804 00:44:04.501673   98453 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0804 00:44:04.530786   98453 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0804 00:44:04.530815   98453 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0804 00:44:04.571259   98453 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0804 00:44:04.571288   98453 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0804 00:44:04.643478   98453 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0804 00:44:04.643504   98453 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0804 00:44:04.669554   98453 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0804 00:44:04.669576   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0804 00:44:04.686718   98453 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0804 00:44:04.686743   98453 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0804 00:44:04.761890   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0804 00:44:04.785975   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0804 00:44:04.786005   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0804 00:44:04.825410   98453 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0804 00:44:04.825436   98453 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0804 00:44:04.842424   98453 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.129854305s)
	I0804 00:44:04.842499   98453 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.126269235s)
	I0804 00:44:04.842573   98453 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 00:44:04.842596   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 00:44:04.849031   98453 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:44:04.849066   98453 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0804 00:44:04.870691   98453 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0804 00:44:04.870724   98453 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0804 00:44:04.897757   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0804 00:44:04.940078   98453 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0804 00:44:04.940115   98453 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0804 00:44:04.977924   98453 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0804 00:44:04.977957   98453 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0804 00:44:04.995788   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0804 00:44:05.061937   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0804 00:44:05.066805   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0804 00:44:05.066833   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0804 00:44:05.082929   98453 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0804 00:44:05.082967   98453 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0804 00:44:05.172949   98453 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0804 00:44:05.172978   98453 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0804 00:44:05.192679   98453 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0804 00:44:05.192704   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0804 00:44:05.313744   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0804 00:44:05.313771   98453 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0804 00:44:05.387755   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0804 00:44:05.387787   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0804 00:44:05.404055   98453 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0804 00:44:05.404082   98453 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0804 00:44:05.451459   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0804 00:44:05.525505   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.353772209s)
	I0804 00:44:05.525566   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:05.525580   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:05.525875   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:05.525891   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:05.525901   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:05.525909   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:05.526244   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:05.526265   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:05.526266   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:05.628504   98453 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0804 00:44:05.628525   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0804 00:44:05.706705   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0804 00:44:05.706737   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0804 00:44:05.819075   98453 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0804 00:44:05.819099   98453 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0804 00:44:05.991637   98453 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0804 00:44:05.991667   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0804 00:44:05.997704   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0804 00:44:06.005147   98453 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0804 00:44:06.005174   98453 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0804 00:44:06.276141   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0804 00:44:06.330089   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0804 00:44:06.330116   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0804 00:44:06.668356   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0804 00:44:06.668385   98453 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0804 00:44:06.919237   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0804 00:44:06.919277   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0804 00:44:07.238494   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.998677463s)
	I0804 00:44:07.238557   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:07.238567   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:07.238925   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:07.239000   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:07.239022   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:07.239035   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:07.239048   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:07.239474   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:07.239491   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:07.345111   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0804 00:44:07.345134   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0804 00:44:07.543094   98453 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0804 00:44:07.543121   98453 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0804 00:44:08.036250   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0804 00:44:09.795828   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.554365087s)
	I0804 00:44:09.795885   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:09.795898   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:09.795898   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.462226019s)
	I0804 00:44:09.795946   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:09.795974   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:09.796214   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:09.796258   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:09.796268   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:09.796277   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:09.796285   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:09.796299   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:09.796321   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:09.796331   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:09.796337   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:09.796346   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:09.796471   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:09.796483   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:09.796635   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:09.796680   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:09.796692   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:09.875158   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:09.875177   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:09.875473   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:09.875489   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:10.104766   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.671422899s)
	I0804 00:44:10.104824   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:10.104834   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:10.105155   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:10.105205   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:10.105224   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:10.105243   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:10.105514   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:10.105531   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:10.105638   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:10.198197   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:10.198219   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:10.198511   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:10.198532   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:10.836369   98453 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0804 00:44:10.836414   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:10.839571   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:10.840101   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:10.840130   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:10.840348   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:10.840631   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:10.840790   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:10.840971   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:11.500835   98453 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0804 00:44:11.736562   98453 addons.go:234] Setting addon gcp-auth=true in "addons-474272"
	I0804 00:44:11.736629   98453 host.go:66] Checking if "addons-474272" exists ...
	I0804 00:44:11.737144   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:11.737186   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:11.752379   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36707
	I0804 00:44:11.752821   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:11.753344   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:11.753388   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:11.753797   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:11.754284   98453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 00:44:11.754311   98453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 00:44:11.770166   98453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I0804 00:44:11.770709   98453 main.go:141] libmachine: () Calling .GetVersion
	I0804 00:44:11.771285   98453 main.go:141] libmachine: Using API Version  1
	I0804 00:44:11.771308   98453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 00:44:11.771635   98453 main.go:141] libmachine: () Calling .GetMachineName
	I0804 00:44:11.771834   98453 main.go:141] libmachine: (addons-474272) Calling .GetState
	I0804 00:44:11.773527   98453 main.go:141] libmachine: (addons-474272) Calling .DriverName
	I0804 00:44:11.773756   98453 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0804 00:44:11.773782   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHHostname
	I0804 00:44:11.776771   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:11.777155   98453 main.go:141] libmachine: (addons-474272) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:d9:6e", ip: ""} in network mk-addons-474272: {Iface:virbr1 ExpiryTime:2024-08-04 01:43:22 +0000 UTC Type:0 Mac:52:54:00:a6:d9:6e Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:addons-474272 Clientid:01:52:54:00:a6:d9:6e}
	I0804 00:44:11.777188   98453 main.go:141] libmachine: (addons-474272) DBG | domain addons-474272 has defined IP address 192.168.39.127 and MAC address 52:54:00:a6:d9:6e in network mk-addons-474272
	I0804 00:44:11.777342   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHPort
	I0804 00:44:11.777530   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHKeyPath
	I0804 00:44:11.777700   98453 main.go:141] libmachine: (addons-474272) Calling .GetSSHUsername
	I0804 00:44:11.777829   98453 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/addons-474272/id_rsa Username:docker}
	I0804 00:44:12.851575   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.354268617s)
	I0804 00:44:12.851643   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.851649   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.089720751s)
	I0804 00:44:12.851693   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.851714   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.851732   98453 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.009136179s)
	I0804 00:44:12.851657   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.851695   98453 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.0090813s)
	I0804 00:44:12.851828   98453 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0804 00:44:12.851831   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.954038303s)
	I0804 00:44:12.851919   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.856087034s)
	I0804 00:44:12.851953   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.851970   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.851993   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852016   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852108   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.790132743s)
	I0804 00:44:12.852130   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852140   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852180   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.852212   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.400724624s)
	I0804 00:44:12.852223   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.852227   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852231   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.852238   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852243   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852250   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852328   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.852342   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.852350   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852358   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852370   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.85462643s)
	W0804 00:44:12.852396   98453 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0804 00:44:12.852419   98453 retry.go:31] will retry after 238.438809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0804 00:44:12.852434   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.852500   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.576326465s)
	I0804 00:44:12.852516   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.852525   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.852775   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.852799   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.852810   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.852821   98453 addons.go:475] Verifying addon ingress=true in "addons-474272"
	I0804 00:44:12.852990   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.853022   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.853037   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.853040   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.853049   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.853058   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.853065   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.853066   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.853126   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.853135   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.853142   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.853198   98453 node_ready.go:35] waiting up to 6m0s for node "addons-474272" to be "Ready" ...
	I0804 00:44:12.853432   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.853451   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.853460   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.853469   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.853479   98453 addons.go:475] Verifying addon registry=true in "addons-474272"
	I0804 00:44:12.853787   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.853813   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.853819   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.854215   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.854226   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.854235   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.854243   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.854566   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.854656   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.854664   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.854703   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.854765   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.855159   98453 out.go:177] * Verifying ingress addon...
	I0804 00:44:12.855363   98453 out.go:177] * Verifying registry addon...
	I0804 00:44:12.855559   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.855857   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.855868   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:12.855875   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:12.856155   98453 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-474272 service yakd-dashboard -n yakd-dashboard
	
	I0804 00:44:12.856282   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:12.856290   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.856302   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.856497   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.856510   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.856520   98453 addons.go:475] Verifying addon metrics-server=true in "addons-474272"
	I0804 00:44:12.856764   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:12.856774   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:12.858106   98453 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0804 00:44:12.858235   98453 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0804 00:44:12.867285   98453 node_ready.go:49] node "addons-474272" has status "Ready":"True"
	I0804 00:44:12.867307   98453 node_ready.go:38] duration metric: took 14.09237ms for node "addons-474272" to be "Ready" ...
	I0804 00:44:12.867316   98453 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:44:12.877259   98453 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0804 00:44:12.877287   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:12.880596   98453 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0804 00:44:12.880615   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:12.901692   98453 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44tpd" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.914803   98453 pod_ready.go:92] pod "coredns-7db6d8ff4d-44tpd" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:12.914838   98453 pod_ready.go:81] duration metric: took 13.107427ms for pod "coredns-7db6d8ff4d-44tpd" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.914851   98453 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dbbtm" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.931260   98453 pod_ready.go:92] pod "coredns-7db6d8ff4d-dbbtm" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:12.931288   98453 pod_ready.go:81] duration metric: took 16.429272ms for pod "coredns-7db6d8ff4d-dbbtm" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.931300   98453 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.967666   98453 pod_ready.go:92] pod "etcd-addons-474272" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:12.967701   98453 pod_ready.go:81] duration metric: took 36.392972ms for pod "etcd-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.967715   98453 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.993520   98453 pod_ready.go:92] pod "kube-apiserver-addons-474272" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:12.993544   98453 pod_ready.go:81] duration metric: took 25.820913ms for pod "kube-apiserver-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:12.993554   98453 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:13.091936   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0804 00:44:13.256058   98453 pod_ready.go:92] pod "kube-controller-manager-addons-474272" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:13.256084   98453 pod_ready.go:81] duration metric: took 262.52035ms for pod "kube-controller-manager-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:13.256097   98453 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wlj57" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:13.355282   98453 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-474272" context rescaled to 1 replicas
	I0804 00:44:13.365439   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:13.367921   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:13.657221   98453 pod_ready.go:92] pod "kube-proxy-wlj57" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:13.657245   98453 pod_ready.go:81] duration metric: took 401.130682ms for pod "kube-proxy-wlj57" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:13.657258   98453 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:13.871022   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:13.871333   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:14.072496   98453 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.298711127s)
	I0804 00:44:14.072865   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.036532353s)
	I0804 00:44:14.072941   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:14.072961   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:14.073262   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:14.073281   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:14.073309   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:14.073389   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:14.073411   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:14.073676   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:14.073691   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:14.073704   98453 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-474272"
	I0804 00:44:14.074633   98453 pod_ready.go:92] pod "kube-scheduler-addons-474272" in "kube-system" namespace has status "Ready":"True"
	I0804 00:44:14.074657   98453 pod_ready.go:81] duration metric: took 417.389929ms for pod "kube-scheduler-addons-474272" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:14.074671   98453 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace to be "Ready" ...
	I0804 00:44:14.075473   98453 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0804 00:44:14.075556   98453 out.go:177] * Verifying csi-hostpath-driver addon...
	I0804 00:44:14.077502   98453 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0804 00:44:14.078171   98453 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0804 00:44:14.078838   98453 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0804 00:44:14.078872   98453 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0804 00:44:14.090228   98453 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0804 00:44:14.090261   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:14.136540   98453 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0804 00:44:14.136574   98453 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0804 00:44:14.174744   98453 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0804 00:44:14.174777   98453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0804 00:44:14.240679   98453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0804 00:44:14.363740   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:14.364561   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:14.586285   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:14.672065   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.580056788s)
	I0804 00:44:14.672154   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:14.672174   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:14.672512   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:14.672519   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:14.672531   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:14.672540   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:14.672550   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:14.672804   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:14.672848   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:14.863645   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:14.863932   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:15.083589   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:15.401419   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:15.402788   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:15.500798   98453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.260070592s)
	I0804 00:44:15.500872   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:15.500897   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:15.501208   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:15.501233   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:15.501243   98453 main.go:141] libmachine: Making call to close driver server
	I0804 00:44:15.501233   98453 main.go:141] libmachine: (addons-474272) DBG | Closing plugin on server side
	I0804 00:44:15.501252   98453 main.go:141] libmachine: (addons-474272) Calling .Close
	I0804 00:44:15.501508   98453 main.go:141] libmachine: Successfully made call to close driver server
	I0804 00:44:15.501523   98453 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 00:44:15.502636   98453 addons.go:475] Verifying addon gcp-auth=true in "addons-474272"
	I0804 00:44:15.504275   98453 out.go:177] * Verifying gcp-auth addon...
	I0804 00:44:15.506281   98453 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0804 00:44:15.541105   98453 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0804 00:44:15.541128   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:15.610316   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:15.868811   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:15.869492   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:16.010959   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:16.087460   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:16.087527   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:16.363735   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:16.365983   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:16.510496   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:16.584458   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:16.867185   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:16.867353   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:17.010703   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:17.089441   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:17.362939   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:17.363832   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:17.510599   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:17.588265   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:17.864093   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:17.864682   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:18.010666   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:18.085435   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:18.363938   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:18.364072   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:18.509617   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:18.583634   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:18.584039   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:18.863878   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:18.864075   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:19.010229   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:19.083510   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:19.366466   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:19.368397   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:19.785716   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:19.794973   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:19.864690   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:19.864894   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:20.010422   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:20.083415   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:20.367102   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:20.367401   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:20.510070   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:20.583201   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:20.864977   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:20.868368   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:21.023877   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:21.083466   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:21.084526   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:21.363092   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:21.363397   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:21.510272   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:21.584619   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:21.873002   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:21.875318   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:22.010081   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:22.084481   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:22.363092   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:22.363437   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:22.510298   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:22.583609   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:22.864109   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:22.865461   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:23.010396   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:23.083629   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:23.363511   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:23.364370   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:23.510652   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:23.580575   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:23.582992   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:23.863711   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:23.863875   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:24.009876   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:24.083698   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:24.364652   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:24.366131   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:24.510715   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:24.583452   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:24.865416   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:24.866014   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:25.011279   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:25.083266   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:25.364705   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:25.365866   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:25.510088   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:25.581676   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:25.584966   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:25.864429   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:25.865707   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:26.010979   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:26.083637   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:26.363653   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:26.363898   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:26.510098   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:26.583186   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:26.862033   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:26.862861   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:27.010987   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:27.083760   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:27.364393   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:27.364427   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:27.510507   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:27.582188   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:27.584848   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:27.864543   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:27.864556   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:28.011355   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:28.084179   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:28.364261   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:28.364635   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:28.510778   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:28.587913   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:28.863371   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:28.878468   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:29.010515   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:29.083812   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:29.362752   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:29.365398   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:29.510098   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:29.583433   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:29.583509   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:29.864378   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:29.871019   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:30.010572   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:30.085273   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:30.363766   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:30.363859   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:30.511671   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:30.583082   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:30.863962   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:30.864386   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:31.010905   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:31.082754   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:31.364712   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:31.364943   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:31.509626   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:31.584114   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:31.870559   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:31.871503   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:32.010631   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:32.462298   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:32.462439   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:32.463568   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:32.464780   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:32.510205   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:32.584346   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:32.863657   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:32.864567   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:33.010774   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:33.087153   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:33.363596   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:33.364343   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:33.510419   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:33.584967   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:33.863933   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:33.864389   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:34.010587   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:34.083031   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:34.363538   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:34.367576   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:34.510796   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:34.581581   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:34.583061   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:34.863949   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:34.864543   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:35.010474   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:35.084763   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:35.364286   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:35.365510   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:35.511929   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:35.582933   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:35.863409   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:35.865186   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:36.010360   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:36.085699   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:36.363905   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:36.365336   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:36.510520   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:36.583248   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:36.589511   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:36.866719   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:36.872259   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:37.010625   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:37.085187   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:37.364150   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:37.364994   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:37.510195   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:37.583083   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:37.863490   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:37.863931   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:38.009832   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:38.089284   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:38.364843   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:38.365323   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:38.509780   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:38.583752   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:38.864511   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:38.865692   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:39.009629   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:39.081616   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:39.083210   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:39.364508   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:39.364742   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:39.510561   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:39.589139   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:39.863410   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:39.863653   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:40.009677   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:40.082930   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:40.363321   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0804 00:44:40.364201   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:40.510656   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:40.583777   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:40.863132   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:40.863202   98453 kapi.go:107] duration metric: took 28.004969933s to wait for kubernetes.io/minikube-addons=registry ...
	I0804 00:44:41.010538   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:41.083311   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:41.367846   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:41.511630   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:41.581880   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:41.584637   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:41.863438   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:42.010444   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:42.083830   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:42.363467   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:42.510873   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:42.588765   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:42.863287   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:43.009758   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:43.082872   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:43.362269   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:43.562623   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:43.584185   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:43.589788   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:43.863604   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:44.010123   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:44.088746   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:44.362808   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:44.510158   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:44.586507   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:45.126682   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:45.128086   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:45.129279   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:45.363544   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:45.509675   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:45.583966   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:45.863028   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:46.009604   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:46.080986   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:46.083686   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:46.363516   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:46.510685   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:46.584065   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:46.862870   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:47.010135   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:47.082926   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:47.366427   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:47.509478   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:47.585445   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:47.862584   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:48.011211   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:48.086687   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:48.364498   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:48.517170   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:48.584215   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:48.598043   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:48.863334   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:49.010354   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:49.085945   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:49.362273   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:49.510286   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:49.589629   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:49.863766   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:50.010975   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:50.083751   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:50.363348   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:50.510908   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:50.586349   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:50.587079   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:50.873202   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:51.010725   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:51.083254   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:51.362963   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:51.509768   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:51.583657   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:51.863530   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:52.010768   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:52.085003   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:52.363065   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:52.510250   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:52.594964   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:52.598947   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:52.862972   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:53.010656   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:53.083135   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:53.362941   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:53.509481   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:53.585594   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:53.865875   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:54.010733   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:54.085260   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:54.363359   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:54.510989   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:54.587783   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:54.863069   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:55.009807   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:55.083602   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:55.084028   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:55.849964   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:55.850160   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:55.850720   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:55.864091   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:56.009388   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:56.087124   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:56.363718   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:56.513385   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:56.583992   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:56.862828   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:57.009439   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:57.084165   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:57.363010   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:57.510486   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:57.582038   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:44:57.584815   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:57.862551   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:58.010916   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:58.084923   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:58.362547   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:58.510028   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:58.587093   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:58.863128   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:59.010872   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:59.083825   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:59.505799   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:44:59.511531   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:44:59.586063   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:44:59.862741   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:00.010138   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:00.080344   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:00.083092   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:00.363297   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:00.611455   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:00.613212   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:00.863878   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:01.010009   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:01.084993   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:01.364078   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:01.510871   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:01.585475   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:01.863328   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:02.010246   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:02.082152   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:02.085947   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:02.362324   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:02.509295   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:02.582738   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:02.864542   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:03.010135   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:03.089878   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:03.363390   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:03.509637   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:03.584921   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:03.862464   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:04.010536   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:04.084744   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:04.086062   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:04.363468   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:04.510097   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:04.582471   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:04.863218   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:05.010164   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:05.083791   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:05.363846   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:05.510888   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:05.583616   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:05.862008   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:06.211502   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:06.213800   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:06.215909   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:06.362926   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:06.510287   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:06.583514   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:06.862868   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:07.011034   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:07.084781   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:07.363214   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:07.510629   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:07.584486   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:07.869739   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:08.010357   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:08.083080   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:08.363253   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:08.510934   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:08.583045   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:08.584644   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:09.098061   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:09.099791   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:09.099899   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:09.373524   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:09.510052   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:09.583603   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:09.863152   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:10.009729   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:10.085533   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:10.366360   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:10.510829   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:10.584371   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:10.862810   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:11.010410   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:11.083993   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:11.084262   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:11.362321   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:11.510363   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:11.584208   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:11.864402   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:12.010520   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:12.083786   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:12.363365   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:12.511362   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:12.583632   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:12.862938   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:13.010377   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:13.088643   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:13.093419   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:13.366195   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:13.511230   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:13.591929   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:13.863862   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:14.016655   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:14.084980   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:14.363324   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:14.510009   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:14.584394   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:14.862482   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:15.010674   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:15.087663   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:15.363424   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:15.511758   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:15.585247   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:15.586369   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:15.862688   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:16.010725   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:16.122606   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:16.867632   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:16.869372   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:16.870543   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:16.880368   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:17.010690   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:17.087844   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:17.364581   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:17.509875   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:17.583745   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:17.868428   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:18.010534   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:18.082987   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:18.084490   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:18.367509   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:18.516375   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:18.591136   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:18.862763   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:19.010846   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:19.082463   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:19.362815   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:19.511954   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:19.585717   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:19.863225   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:20.010497   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:20.086533   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:20.363241   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:20.509687   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:20.582288   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:20.582510   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:20.867096   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:21.009745   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:21.092548   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:21.363625   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:21.509888   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:21.584075   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:21.863351   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:22.009882   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:22.083565   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:22.362166   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:22.510203   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:22.583600   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:22.863237   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:23.010279   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:23.087954   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:23.094980   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:23.727567   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:23.730258   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:23.730348   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:23.863022   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:24.010386   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:24.083177   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:24.363141   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:24.509583   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:24.586516   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:24.862789   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:25.010463   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:25.084204   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:25.362708   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:25.510900   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:25.591215   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:25.592950   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:25.863709   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:26.011445   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:26.083566   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:26.363521   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:26.510946   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:26.598082   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:26.870538   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:27.015074   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:27.089137   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:27.363106   98453 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0804 00:45:27.515236   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:27.586277   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:27.861886   98453 kapi.go:107] duration metric: took 1m15.003772127s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0804 00:45:28.016428   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:28.080923   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:28.083639   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:28.514044   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:28.595125   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:29.009986   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:29.083138   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:29.509609   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:29.583592   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:30.010205   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:30.082553   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:30.086581   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:30.509842   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:30.584262   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:31.010295   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:31.089403   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:31.511255   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:31.585994   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:32.010989   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0804 00:45:32.083005   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:32.083275   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:32.511444   98453 kapi.go:107] duration metric: took 1m17.005155983s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0804 00:45:32.513144   98453 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-474272 cluster.
	I0804 00:45:32.514831   98453 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0804 00:45:32.516380   98453 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0804 00:45:32.584367   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:33.082941   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:33.615458   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:34.083261   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:34.581662   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:34.584383   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:35.084180   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:35.589601   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:36.083561   98453 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0804 00:45:36.585174   98453 kapi.go:107] duration metric: took 1m22.506998244s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0804 00:45:36.586084   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:36.587285   98453 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, storage-provisioner-rancher, inspektor-gadget, helm-tiller, metrics-server, ingress-dns, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0804 00:45:36.588664   98453 addons.go:510] duration metric: took 1m32.876044139s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass storage-provisioner-rancher inspektor-gadget helm-tiller metrics-server ingress-dns yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0804 00:45:39.081304   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:41.085118   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:43.581790   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:46.081714   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:48.584628   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:51.084305   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:53.582180   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:56.081289   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:45:58.081873   98453 pod_ready.go:102] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"False"
	I0804 00:46:00.581762   98453 pod_ready.go:92] pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:00.581784   98453 pod_ready.go:81] duration metric: took 1m46.507106122s for pod "metrics-server-c59844bb4-q9wqv" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:00.581795   98453 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-jpv97" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:00.586944   98453 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-jpv97" in "kube-system" namespace has status "Ready":"True"
	I0804 00:46:00.586963   98453 pod_ready.go:81] duration metric: took 5.160944ms for pod "nvidia-device-plugin-daemonset-jpv97" in "kube-system" namespace to be "Ready" ...
	I0804 00:46:00.586980   98453 pod_ready.go:38] duration metric: took 1m47.719653312s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 00:46:00.587001   98453 api_server.go:52] waiting for apiserver process to appear ...
	I0804 00:46:00.587044   98453 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 00:46:00.587100   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 00:46:00.650419   98453 cri.go:89] found id: "3ccc4ab2d974cc09574bf160a5ecfe1be01ac26289952afda50396b341b56650"
	I0804 00:46:00.650454   98453 cri.go:89] found id: ""
	I0804 00:46:00.650466   98453 logs.go:276] 1 containers: [3ccc4ab2d974cc09574bf160a5ecfe1be01ac26289952afda50396b341b56650]
	I0804 00:46:00.650530   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.655451   98453 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 00:46:00.655518   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 00:46:00.710714   98453 cri.go:89] found id: "3ed7c502eefdcef813783a097ee5f7a771e11b00df8b9e8c1f96b65ca45dacd4"
	I0804 00:46:00.710743   98453 cri.go:89] found id: ""
	I0804 00:46:00.710755   98453 logs.go:276] 1 containers: [3ed7c502eefdcef813783a097ee5f7a771e11b00df8b9e8c1f96b65ca45dacd4]
	I0804 00:46:00.710815   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.715044   98453 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 00:46:00.715104   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 00:46:00.753867   98453 cri.go:89] found id: "154520113e0c0ac62f75f240b8fdd952947c695ede4cd7f3fc0586e8cb983572"
	I0804 00:46:00.753898   98453 cri.go:89] found id: ""
	I0804 00:46:00.753908   98453 logs.go:276] 1 containers: [154520113e0c0ac62f75f240b8fdd952947c695ede4cd7f3fc0586e8cb983572]
	I0804 00:46:00.753959   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.758620   98453 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 00:46:00.758685   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 00:46:00.798291   98453 cri.go:89] found id: "dec88c09b82c2268b9f733d821fb11278f9b55af2875741da1adc9f9a4b340ad"
	I0804 00:46:00.798315   98453 cri.go:89] found id: ""
	I0804 00:46:00.798325   98453 logs.go:276] 1 containers: [dec88c09b82c2268b9f733d821fb11278f9b55af2875741da1adc9f9a4b340ad]
	I0804 00:46:00.798393   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.804048   98453 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 00:46:00.804115   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 00:46:00.844631   98453 cri.go:89] found id: "609d2e6e9bae12acd61e22064820337c41075d9742e846df9844acea8a0ce641"
	I0804 00:46:00.844654   98453 cri.go:89] found id: ""
	I0804 00:46:00.844670   98453 logs.go:276] 1 containers: [609d2e6e9bae12acd61e22064820337c41075d9742e846df9844acea8a0ce641]
	I0804 00:46:00.844718   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.852087   98453 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 00:46:00.852150   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 00:46:00.898388   98453 cri.go:89] found id: "59d80a2419fd6cf02ab496dbefaaad68beeb211b590f6458c24838d19edfc2ab"
	I0804 00:46:00.898422   98453 cri.go:89] found id: ""
	I0804 00:46:00.898434   98453 logs.go:276] 1 containers: [59d80a2419fd6cf02ab496dbefaaad68beeb211b590f6458c24838d19edfc2ab]
	I0804 00:46:00.898502   98453 ssh_runner.go:195] Run: which crictl
	I0804 00:46:00.905053   98453 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 00:46:00.905147   98453 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 00:46:00.948636   98453 cri.go:89] found id: ""
	I0804 00:46:00.948676   98453 logs.go:276] 0 containers: []
	W0804 00:46:00.948693   98453 logs.go:278] No container was found matching "kindnet"
	I0804 00:46:00.948707   98453 logs.go:123] Gathering logs for dmesg ...
	I0804 00:46:00.948726   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 00:46:00.963832   98453 logs.go:123] Gathering logs for kube-apiserver [3ccc4ab2d974cc09574bf160a5ecfe1be01ac26289952afda50396b341b56650] ...
	I0804 00:46:00.963882   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ccc4ab2d974cc09574bf160a5ecfe1be01ac26289952afda50396b341b56650"
	I0804 00:46:01.019024   98453 logs.go:123] Gathering logs for etcd [3ed7c502eefdcef813783a097ee5f7a771e11b00df8b9e8c1f96b65ca45dacd4] ...
	I0804 00:46:01.019063   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ed7c502eefdcef813783a097ee5f7a771e11b00df8b9e8c1f96b65ca45dacd4"
	I0804 00:46:01.085700   98453 logs.go:123] Gathering logs for coredns [154520113e0c0ac62f75f240b8fdd952947c695ede4cd7f3fc0586e8cb983572] ...
	I0804 00:46:01.085739   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 154520113e0c0ac62f75f240b8fdd952947c695ede4cd7f3fc0586e8cb983572"
	I0804 00:46:01.122662   98453 logs.go:123] Gathering logs for kube-scheduler [dec88c09b82c2268b9f733d821fb11278f9b55af2875741da1adc9f9a4b340ad] ...
	I0804 00:46:01.122692   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dec88c09b82c2268b9f733d821fb11278f9b55af2875741da1adc9f9a4b340ad"
	I0804 00:46:01.169873   98453 logs.go:123] Gathering logs for kube-proxy [609d2e6e9bae12acd61e22064820337c41075d9742e846df9844acea8a0ce641] ...
	I0804 00:46:01.169911   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 609d2e6e9bae12acd61e22064820337c41075d9742e846df9844acea8a0ce641"
	I0804 00:46:01.207985   98453 logs.go:123] Gathering logs for kubelet ...
	I0804 00:46:01.208023   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 00:46:01.274559   98453 logs.go:138] Found kubelet problem: Aug 04 00:44:15 addons-474272 kubelet[1267]: W0804 00:44:15.474271    1267 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-474272" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-474272' and this object
	W0804 00:46:01.274732   98453 logs.go:138] Found kubelet problem: Aug 04 00:44:15 addons-474272 kubelet[1267]: E0804 00:44:15.481480    1267 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-474272" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-474272' and this object
	I0804 00:46:01.294456   98453 logs.go:123] Gathering logs for describe nodes ...
	I0804 00:46:01.294480   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 00:46:01.437853   98453 logs.go:123] Gathering logs for kube-controller-manager [59d80a2419fd6cf02ab496dbefaaad68beeb211b590f6458c24838d19edfc2ab] ...
	I0804 00:46:01.437900   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59d80a2419fd6cf02ab496dbefaaad68beeb211b590f6458c24838d19edfc2ab"
	I0804 00:46:01.514213   98453 logs.go:123] Gathering logs for CRI-O ...
	I0804 00:46:01.514255   98453 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-474272 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.05s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 node stop m02 -v=7 --alsologtostderr
E0804 01:33:04.190487   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:34:26.111619   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.485002684s)

                                                
                                                
-- stdout --
	* Stopping node "ha-998889-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:32:25.753458  116550 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:32:25.753725  116550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:32:25.753737  116550 out.go:304] Setting ErrFile to fd 2...
	I0804 01:32:25.753744  116550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:32:25.753996  116550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:32:25.754278  116550 mustload.go:65] Loading cluster: ha-998889
	I0804 01:32:25.754623  116550 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:32:25.754638  116550 stop.go:39] StopHost: ha-998889-m02
	I0804 01:32:25.755054  116550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:32:25.755113  116550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:32:25.770890  116550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0804 01:32:25.771461  116550 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:32:25.772126  116550 main.go:141] libmachine: Using API Version  1
	I0804 01:32:25.772161  116550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:32:25.772580  116550 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:32:25.774763  116550 out.go:177] * Stopping node "ha-998889-m02"  ...
	I0804 01:32:25.776241  116550 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 01:32:25.776276  116550 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:32:25.776519  116550 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 01:32:25.776554  116550 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:32:25.779369  116550 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:32:25.780009  116550 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:32:25.780040  116550 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:32:25.780242  116550 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:32:25.780438  116550 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:32:25.780614  116550 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:32:25.780801  116550 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:32:25.872385  116550 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 01:32:25.927885  116550 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 01:32:25.983831  116550 main.go:141] libmachine: Stopping "ha-998889-m02"...
	I0804 01:32:25.983868  116550 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:32:25.985615  116550 main.go:141] libmachine: (ha-998889-m02) Calling .Stop
	I0804 01:32:25.989200  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 0/120
	I0804 01:32:26.990718  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 1/120
	I0804 01:32:27.992217  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 2/120
	I0804 01:32:28.993565  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 3/120
	I0804 01:32:29.995936  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 4/120
	I0804 01:32:30.997970  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 5/120
	I0804 01:32:31.999944  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 6/120
	I0804 01:32:33.001408  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 7/120
	I0804 01:32:34.002792  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 8/120
	I0804 01:32:35.004193  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 9/120
	I0804 01:32:36.006713  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 10/120
	I0804 01:32:37.008243  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 11/120
	I0804 01:32:38.009643  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 12/120
	I0804 01:32:39.011904  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 13/120
	I0804 01:32:40.014041  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 14/120
	I0804 01:32:41.015675  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 15/120
	I0804 01:32:42.017218  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 16/120
	I0804 01:32:43.018760  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 17/120
	I0804 01:32:44.020223  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 18/120
	I0804 01:32:45.021612  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 19/120
	I0804 01:32:46.023827  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 20/120
	I0804 01:32:47.026386  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 21/120
	I0804 01:32:48.027700  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 22/120
	I0804 01:32:49.029521  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 23/120
	I0804 01:32:50.031094  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 24/120
	I0804 01:32:51.032629  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 25/120
	I0804 01:32:52.034199  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 26/120
	I0804 01:32:53.036113  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 27/120
	I0804 01:32:54.037640  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 28/120
	I0804 01:32:55.038938  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 29/120
	I0804 01:32:56.041122  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 30/120
	I0804 01:32:57.042518  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 31/120
	I0804 01:32:58.044191  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 32/120
	I0804 01:32:59.046387  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 33/120
	I0804 01:33:00.047980  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 34/120
	I0804 01:33:01.049582  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 35/120
	I0804 01:33:02.051947  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 36/120
	I0804 01:33:03.053210  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 37/120
	I0804 01:33:04.055024  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 38/120
	I0804 01:33:05.056650  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 39/120
	I0804 01:33:06.058717  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 40/120
	I0804 01:33:07.060903  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 41/120
	I0804 01:33:08.062107  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 42/120
	I0804 01:33:09.063847  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 43/120
	I0804 01:33:10.065528  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 44/120
	I0804 01:33:11.067212  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 45/120
	I0804 01:33:12.068837  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 46/120
	I0804 01:33:13.071039  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 47/120
	I0804 01:33:14.072675  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 48/120
	I0804 01:33:15.074078  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 49/120
	I0804 01:33:16.076299  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 50/120
	I0804 01:33:17.077426  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 51/120
	I0804 01:33:18.078876  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 52/120
	I0804 01:33:19.080119  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 53/120
	I0804 01:33:20.081617  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 54/120
	I0804 01:33:21.082997  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 55/120
	I0804 01:33:22.085182  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 56/120
	I0804 01:33:23.086560  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 57/120
	I0804 01:33:24.088112  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 58/120
	I0804 01:33:25.089627  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 59/120
	I0804 01:33:26.091895  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 60/120
	I0804 01:33:27.093368  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 61/120
	I0804 01:33:28.094697  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 62/120
	I0804 01:33:29.096251  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 63/120
	I0804 01:33:30.097682  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 64/120
	I0804 01:33:31.099671  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 65/120
	I0804 01:33:32.100977  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 66/120
	I0804 01:33:33.102419  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 67/120
	I0804 01:33:34.103902  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 68/120
	I0804 01:33:35.105532  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 69/120
	I0804 01:33:36.107638  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 70/120
	I0804 01:33:37.109894  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 71/120
	I0804 01:33:38.111341  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 72/120
	I0804 01:33:39.112599  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 73/120
	I0804 01:33:40.114005  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 74/120
	I0804 01:33:41.115992  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 75/120
	I0804 01:33:42.117476  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 76/120
	I0804 01:33:43.119050  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 77/120
	I0804 01:33:44.120671  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 78/120
	I0804 01:33:45.122104  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 79/120
	I0804 01:33:46.124491  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 80/120
	I0804 01:33:47.125755  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 81/120
	I0804 01:33:48.127933  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 82/120
	I0804 01:33:49.129500  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 83/120
	I0804 01:33:50.130812  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 84/120
	I0804 01:33:51.132596  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 85/120
	I0804 01:33:52.134144  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 86/120
	I0804 01:33:53.135905  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 87/120
	I0804 01:33:54.137317  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 88/120
	I0804 01:33:55.138757  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 89/120
	I0804 01:33:56.141042  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 90/120
	I0804 01:33:57.142292  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 91/120
	I0804 01:33:58.143639  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 92/120
	I0804 01:33:59.145061  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 93/120
	I0804 01:34:00.146446  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 94/120
	I0804 01:34:01.148296  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 95/120
	I0804 01:34:02.150587  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 96/120
	I0804 01:34:03.152159  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 97/120
	I0804 01:34:04.154462  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 98/120
	I0804 01:34:05.156050  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 99/120
	I0804 01:34:06.157816  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 100/120
	I0804 01:34:07.159561  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 101/120
	I0804 01:34:08.161024  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 102/120
	I0804 01:34:09.162466  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 103/120
	I0804 01:34:10.163873  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 104/120
	I0804 01:34:11.165809  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 105/120
	I0804 01:34:12.168031  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 106/120
	I0804 01:34:13.169394  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 107/120
	I0804 01:34:14.171551  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 108/120
	I0804 01:34:15.172999  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 109/120
	I0804 01:34:16.175310  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 110/120
	I0804 01:34:17.177084  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 111/120
	I0804 01:34:18.179334  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 112/120
	I0804 01:34:19.180981  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 113/120
	I0804 01:34:20.182463  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 114/120
	I0804 01:34:21.184956  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 115/120
	I0804 01:34:22.186822  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 116/120
	I0804 01:34:23.188617  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 117/120
	I0804 01:34:24.190821  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 118/120
	I0804 01:34:25.192114  116550 main.go:141] libmachine: (ha-998889-m02) Waiting for machine to stop 119/120
	I0804 01:34:26.193021  116550 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 01:34:26.193173  116550 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-998889 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (19.214966481s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:34:26.238137  116988 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:34:26.238416  116988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:26.238426  116988 out.go:304] Setting ErrFile to fd 2...
	I0804 01:34:26.238430  116988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:26.238620  116988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:34:26.238785  116988 out.go:298] Setting JSON to false
	I0804 01:34:26.238815  116988 mustload.go:65] Loading cluster: ha-998889
	I0804 01:34:26.238850  116988 notify.go:220] Checking for updates...
	I0804 01:34:26.239205  116988 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:34:26.239221  116988 status.go:255] checking status of ha-998889 ...
	I0804 01:34:26.239620  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.239676  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.255151  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39813
	I0804 01:34:26.255600  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.256241  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.256262  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.256723  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.256962  116988 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:34:26.259094  116988 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:34:26.259112  116988 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:26.259442  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.259485  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.276987  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0804 01:34:26.277451  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.277974  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.278003  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.278343  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.278625  116988 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:34:26.281403  116988 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:26.281853  116988 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:26.281890  116988 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:26.281981  116988 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:26.282301  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.282350  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.297513  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43607
	I0804 01:34:26.297998  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.298554  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.298576  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.298879  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.299097  116988 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:34:26.299295  116988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:26.299316  116988 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:34:26.302317  116988 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:26.302764  116988 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:26.302794  116988 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:26.302834  116988 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:34:26.302991  116988 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:34:26.303174  116988 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:34:26.303335  116988 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:34:26.391699  116988 ssh_runner.go:195] Run: systemctl --version
	I0804 01:34:26.399709  116988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:26.419971  116988 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:26.420000  116988 api_server.go:166] Checking apiserver status ...
	I0804 01:34:26.420033  116988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:26.437194  116988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:34:26.447037  116988 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:26.447107  116988 ssh_runner.go:195] Run: ls
	I0804 01:34:26.451900  116988 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:26.458529  116988 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:26.458554  116988 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:34:26.458567  116988 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:26.458600  116988 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:34:26.458893  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.458942  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.474201  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I0804 01:34:26.474708  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.475198  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.475221  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.475554  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.475736  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:34:26.477426  116988 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:34:26.477443  116988 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:26.477732  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.477764  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.493575  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0804 01:34:26.494030  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.494525  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.494545  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.494872  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.495067  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:34:26.497739  116988 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:26.498174  116988 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:26.498225  116988 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:26.498344  116988 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:26.498669  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:26.498730  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:26.513444  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0804 01:34:26.513818  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:26.514328  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:26.514349  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:26.514648  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:26.514823  116988 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:34:26.515059  116988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:26.515078  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:34:26.518151  116988 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:26.518558  116988 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:26.518586  116988 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:26.518709  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:34:26.518885  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:34:26.519051  116988 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:34:26.519334  116988 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:34:45.025517  116988 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:34:45.025626  116988 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:34:45.025646  116988 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:45.025657  116988 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:34:45.025677  116988 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:45.025685  116988 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:34:45.025992  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.026054  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.043274  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33997
	I0804 01:34:45.043747  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.044241  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.044265  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.044555  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.044734  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:34:45.046318  116988 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:34:45.046335  116988 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:45.046674  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.046723  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.061544  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0804 01:34:45.061952  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.062396  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.062419  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.062816  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.062993  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:34:45.065866  116988 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:45.066317  116988 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:45.066342  116988 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:45.066499  116988 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:45.066911  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.066982  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.082674  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I0804 01:34:45.083075  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.083537  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.083559  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.083869  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.084055  116988 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:34:45.084236  116988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:45.084257  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:34:45.087049  116988 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:45.087521  116988 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:45.087552  116988 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:45.087695  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:34:45.087847  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:34:45.087992  116988 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:34:45.088151  116988 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:34:45.182587  116988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:45.202588  116988 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:45.202618  116988 api_server.go:166] Checking apiserver status ...
	I0804 01:34:45.202657  116988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:45.219012  116988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:34:45.230177  116988 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:45.230245  116988 ssh_runner.go:195] Run: ls
	I0804 01:34:45.235246  116988 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:45.242236  116988 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:45.242262  116988 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:34:45.242270  116988 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:45.242285  116988 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:34:45.242570  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.242605  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.257515  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39357
	I0804 01:34:45.257948  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.258439  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.258465  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.258811  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.258984  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:34:45.260595  116988 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:34:45.260613  116988 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:45.261004  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.261063  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.276199  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I0804 01:34:45.276600  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.277015  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.277038  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.277401  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.277556  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:34:45.280221  116988 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:45.280583  116988 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:45.280610  116988 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:45.280745  116988 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:45.281089  116988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:45.281133  116988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:45.295848  116988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0804 01:34:45.296258  116988 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:45.296747  116988 main.go:141] libmachine: Using API Version  1
	I0804 01:34:45.296772  116988 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:45.297081  116988 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:45.297304  116988 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:34:45.297512  116988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:45.297530  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:34:45.300452  116988 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:45.300869  116988 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:45.300913  116988 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:45.301060  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:34:45.301239  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:34:45.301449  116988 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:34:45.301581  116988 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:34:45.390288  116988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:45.407357  116988 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr" : exit status 3
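In the stderr above, every SSH dial to ha-998889-m02 (192.168.39.200:22) after the preceding "node stop m02" step fails with "no route to host", so that node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent while ha-998889, ha-998889-m03 and ha-998889-m04 still report Running, and the status command exits 3. A minimal sketch for re-checking the same points by hand, assuming the profile name, node IP and key path from this run are still valid (illustrative commands, not part of the test):

	# per-node status, the same command the test runs
	out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr

	# is the stopped secondary control-plane node reachable over SSH at all?
	ssh -o ConnectTimeout=10 -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa docker@192.168.39.200 'df -h /var'

	# is the HA apiserver VIP still answering health checks?
	curl -k https://192.168.39.254:8443/healthz
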
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-998889 -n ha-998889
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-998889 logs -n 25: (1.519429538s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m03_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m04 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp testdata/cp-test.txt                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m04_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03:/home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m03 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-998889 node stop m02 -v=7                                                     | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 01:27:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 01:27:34.034390  112472 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:27:34.034628  112472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:27:34.034636  112472 out.go:304] Setting ErrFile to fd 2...
	I0804 01:27:34.034640  112472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:27:34.034808  112472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:27:34.035375  112472 out.go:298] Setting JSON to false
	I0804 01:27:34.036213  112472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11398,"bootTime":1722723456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:27:34.036272  112472 start.go:139] virtualization: kvm guest
	I0804 01:27:34.038622  112472 out.go:177] * [ha-998889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 01:27:34.039992  112472 notify.go:220] Checking for updates...
	I0804 01:27:34.039997  112472 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:27:34.041501  112472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:27:34.042842  112472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:27:34.044303  112472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.045687  112472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:27:34.047131  112472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:27:34.048733  112472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:27:34.085326  112472 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 01:27:34.086720  112472 start.go:297] selected driver: kvm2
	I0804 01:27:34.086738  112472 start.go:901] validating driver "kvm2" against <nil>
	I0804 01:27:34.086749  112472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:27:34.087453  112472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:27:34.087532  112472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 01:27:34.102852  112472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 01:27:34.102915  112472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 01:27:34.103181  112472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:27:34.103294  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:27:34.103310  112472 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0804 01:27:34.103321  112472 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0804 01:27:34.103396  112472 start.go:340] cluster config:
	{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:27:34.103534  112472 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:27:34.105404  112472 out.go:177] * Starting "ha-998889" primary control-plane node in "ha-998889" cluster
	I0804 01:27:34.106666  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:27:34.106700  112472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 01:27:34.106710  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:27:34.106791  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:27:34.106809  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:27:34.107104  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:27:34.107123  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json: {Name:mkf33ef6ad14f588f0aced43adb897e0932e1149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:27:34.107254  112472 start.go:360] acquireMachinesLock for ha-998889: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:27:34.107280  112472 start.go:364] duration metric: took 14.445µs to acquireMachinesLock for "ha-998889"
	I0804 01:27:34.107296  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:27:34.107350  112472 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 01:27:34.109010  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:27:34.109166  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:27:34.109212  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:27:34.123648  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0804 01:27:34.124111  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:27:34.124657  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:27:34.124688  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:27:34.125044  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:27:34.125269  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:34.125439  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:34.125594  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:27:34.125626  112472 client.go:168] LocalClient.Create starting
	I0804 01:27:34.125657  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:27:34.125688  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:27:34.125710  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:27:34.125765  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:27:34.125790  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:27:34.125803  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:27:34.125819  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:27:34.125827  112472 main.go:141] libmachine: (ha-998889) Calling .PreCreateCheck
	I0804 01:27:34.126164  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:34.126551  112472 main.go:141] libmachine: Creating machine...
	I0804 01:27:34.126565  112472 main.go:141] libmachine: (ha-998889) Calling .Create
	I0804 01:27:34.126711  112472 main.go:141] libmachine: (ha-998889) Creating KVM machine...
	I0804 01:27:34.128025  112472 main.go:141] libmachine: (ha-998889) DBG | found existing default KVM network
	I0804 01:27:34.128678  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.128531  112496 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0804 01:27:34.128711  112472 main.go:141] libmachine: (ha-998889) DBG | created network xml: 
	I0804 01:27:34.128741  112472 main.go:141] libmachine: (ha-998889) DBG | <network>
	I0804 01:27:34.128754  112472 main.go:141] libmachine: (ha-998889) DBG |   <name>mk-ha-998889</name>
	I0804 01:27:34.128764  112472 main.go:141] libmachine: (ha-998889) DBG |   <dns enable='no'/>
	I0804 01:27:34.128771  112472 main.go:141] libmachine: (ha-998889) DBG |   
	I0804 01:27:34.128780  112472 main.go:141] libmachine: (ha-998889) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0804 01:27:34.128798  112472 main.go:141] libmachine: (ha-998889) DBG |     <dhcp>
	I0804 01:27:34.128812  112472 main.go:141] libmachine: (ha-998889) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0804 01:27:34.128822  112472 main.go:141] libmachine: (ha-998889) DBG |     </dhcp>
	I0804 01:27:34.128829  112472 main.go:141] libmachine: (ha-998889) DBG |   </ip>
	I0804 01:27:34.128835  112472 main.go:141] libmachine: (ha-998889) DBG |   
	I0804 01:27:34.128842  112472 main.go:141] libmachine: (ha-998889) DBG | </network>
	I0804 01:27:34.128851  112472 main.go:141] libmachine: (ha-998889) DBG | 
	I0804 01:27:34.133686  112472 main.go:141] libmachine: (ha-998889) DBG | trying to create private KVM network mk-ha-998889 192.168.39.0/24...
	I0804 01:27:34.200185  112472 main.go:141] libmachine: (ha-998889) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 ...
	I0804 01:27:34.200212  112472 main.go:141] libmachine: (ha-998889) DBG | private KVM network mk-ha-998889 192.168.39.0/24 created
	I0804 01:27:34.200223  112472 main.go:141] libmachine: (ha-998889) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:27:34.200261  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.200108  112496 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.200297  112472 main.go:141] libmachine: (ha-998889) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:27:34.476534  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.476353  112496 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa...
	I0804 01:27:34.626294  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.626120  112496 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/ha-998889.rawdisk...
	I0804 01:27:34.626331  112472 main.go:141] libmachine: (ha-998889) DBG | Writing magic tar header
	I0804 01:27:34.626373  112472 main.go:141] libmachine: (ha-998889) DBG | Writing SSH key tar header
	I0804 01:27:34.626402  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.626283  112496 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 ...
	I0804 01:27:34.626416  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 (perms=drwx------)
	I0804 01:27:34.626434  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889
	I0804 01:27:34.626445  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:27:34.626452  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:27:34.626461  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.626473  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:27:34.626485  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:27:34.626496  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:27:34.626509  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:27:34.626520  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home
	I0804 01:27:34.626533  112472 main.go:141] libmachine: (ha-998889) DBG | Skipping /home - not owner
	I0804 01:27:34.626543  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:27:34.626551  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:27:34.626556  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:27:34.626565  112472 main.go:141] libmachine: (ha-998889) Creating domain...
	I0804 01:27:34.627790  112472 main.go:141] libmachine: (ha-998889) define libvirt domain using xml: 
	I0804 01:27:34.627809  112472 main.go:141] libmachine: (ha-998889) <domain type='kvm'>
	I0804 01:27:34.627816  112472 main.go:141] libmachine: (ha-998889)   <name>ha-998889</name>
	I0804 01:27:34.627825  112472 main.go:141] libmachine: (ha-998889)   <memory unit='MiB'>2200</memory>
	I0804 01:27:34.627830  112472 main.go:141] libmachine: (ha-998889)   <vcpu>2</vcpu>
	I0804 01:27:34.627840  112472 main.go:141] libmachine: (ha-998889)   <features>
	I0804 01:27:34.627846  112472 main.go:141] libmachine: (ha-998889)     <acpi/>
	I0804 01:27:34.627852  112472 main.go:141] libmachine: (ha-998889)     <apic/>
	I0804 01:27:34.627860  112472 main.go:141] libmachine: (ha-998889)     <pae/>
	I0804 01:27:34.627868  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.627897  112472 main.go:141] libmachine: (ha-998889)   </features>
	I0804 01:27:34.627904  112472 main.go:141] libmachine: (ha-998889)   <cpu mode='host-passthrough'>
	I0804 01:27:34.627929  112472 main.go:141] libmachine: (ha-998889)   
	I0804 01:27:34.627952  112472 main.go:141] libmachine: (ha-998889)   </cpu>
	I0804 01:27:34.627961  112472 main.go:141] libmachine: (ha-998889)   <os>
	I0804 01:27:34.627974  112472 main.go:141] libmachine: (ha-998889)     <type>hvm</type>
	I0804 01:27:34.627995  112472 main.go:141] libmachine: (ha-998889)     <boot dev='cdrom'/>
	I0804 01:27:34.628013  112472 main.go:141] libmachine: (ha-998889)     <boot dev='hd'/>
	I0804 01:27:34.628022  112472 main.go:141] libmachine: (ha-998889)     <bootmenu enable='no'/>
	I0804 01:27:34.628029  112472 main.go:141] libmachine: (ha-998889)   </os>
	I0804 01:27:34.628037  112472 main.go:141] libmachine: (ha-998889)   <devices>
	I0804 01:27:34.628048  112472 main.go:141] libmachine: (ha-998889)     <disk type='file' device='cdrom'>
	I0804 01:27:34.628064  112472 main.go:141] libmachine: (ha-998889)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/boot2docker.iso'/>
	I0804 01:27:34.628102  112472 main.go:141] libmachine: (ha-998889)       <target dev='hdc' bus='scsi'/>
	I0804 01:27:34.628118  112472 main.go:141] libmachine: (ha-998889)       <readonly/>
	I0804 01:27:34.628128  112472 main.go:141] libmachine: (ha-998889)     </disk>
	I0804 01:27:34.628138  112472 main.go:141] libmachine: (ha-998889)     <disk type='file' device='disk'>
	I0804 01:27:34.628152  112472 main.go:141] libmachine: (ha-998889)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:27:34.628174  112472 main.go:141] libmachine: (ha-998889)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/ha-998889.rawdisk'/>
	I0804 01:27:34.628184  112472 main.go:141] libmachine: (ha-998889)       <target dev='hda' bus='virtio'/>
	I0804 01:27:34.628190  112472 main.go:141] libmachine: (ha-998889)     </disk>
	I0804 01:27:34.628199  112472 main.go:141] libmachine: (ha-998889)     <interface type='network'>
	I0804 01:27:34.628208  112472 main.go:141] libmachine: (ha-998889)       <source network='mk-ha-998889'/>
	I0804 01:27:34.628213  112472 main.go:141] libmachine: (ha-998889)       <model type='virtio'/>
	I0804 01:27:34.628218  112472 main.go:141] libmachine: (ha-998889)     </interface>
	I0804 01:27:34.628224  112472 main.go:141] libmachine: (ha-998889)     <interface type='network'>
	I0804 01:27:34.628233  112472 main.go:141] libmachine: (ha-998889)       <source network='default'/>
	I0804 01:27:34.628245  112472 main.go:141] libmachine: (ha-998889)       <model type='virtio'/>
	I0804 01:27:34.628265  112472 main.go:141] libmachine: (ha-998889)     </interface>
	I0804 01:27:34.628295  112472 main.go:141] libmachine: (ha-998889)     <serial type='pty'>
	I0804 01:27:34.628320  112472 main.go:141] libmachine: (ha-998889)       <target port='0'/>
	I0804 01:27:34.628334  112472 main.go:141] libmachine: (ha-998889)     </serial>
	I0804 01:27:34.628342  112472 main.go:141] libmachine: (ha-998889)     <console type='pty'>
	I0804 01:27:34.628357  112472 main.go:141] libmachine: (ha-998889)       <target type='serial' port='0'/>
	I0804 01:27:34.628387  112472 main.go:141] libmachine: (ha-998889)     </console>
	I0804 01:27:34.628398  112472 main.go:141] libmachine: (ha-998889)     <rng model='virtio'>
	I0804 01:27:34.628411  112472 main.go:141] libmachine: (ha-998889)       <backend model='random'>/dev/random</backend>
	I0804 01:27:34.628427  112472 main.go:141] libmachine: (ha-998889)     </rng>
	I0804 01:27:34.628438  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.628445  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.628456  112472 main.go:141] libmachine: (ha-998889)   </devices>
	I0804 01:27:34.628465  112472 main.go:141] libmachine: (ha-998889) </domain>
	I0804 01:27:34.628476  112472 main.go:141] libmachine: (ha-998889) 
	I0804 01:27:34.634476  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:a4:06:fd in network default
	I0804 01:27:34.635130  112472 main.go:141] libmachine: (ha-998889) Ensuring networks are active...
	I0804 01:27:34.635154  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:34.635863  112472 main.go:141] libmachine: (ha-998889) Ensuring network default is active
	I0804 01:27:34.636220  112472 main.go:141] libmachine: (ha-998889) Ensuring network mk-ha-998889 is active
	I0804 01:27:34.636687  112472 main.go:141] libmachine: (ha-998889) Getting domain xml...
	I0804 01:27:34.637514  112472 main.go:141] libmachine: (ha-998889) Creating domain...
	I0804 01:27:35.817970  112472 main.go:141] libmachine: (ha-998889) Waiting to get IP...
	I0804 01:27:35.818833  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:35.819223  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:35.819283  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:35.819224  112496 retry.go:31] will retry after 296.598754ms: waiting for machine to come up
	I0804 01:27:36.117830  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.118300  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.118325  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.118261  112496 retry.go:31] will retry after 256.62577ms: waiting for machine to come up
	I0804 01:27:36.376733  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.377268  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.377297  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.377194  112496 retry.go:31] will retry after 355.609942ms: waiting for machine to come up
	I0804 01:27:36.734884  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.735340  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.735366  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.735294  112496 retry.go:31] will retry after 478.320401ms: waiting for machine to come up
	I0804 01:27:37.214721  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:37.215102  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:37.215159  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:37.215057  112496 retry.go:31] will retry after 567.406004ms: waiting for machine to come up
	I0804 01:27:37.783807  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:37.784250  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:37.784279  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:37.784204  112496 retry.go:31] will retry after 758.01729ms: waiting for machine to come up
	I0804 01:27:38.544371  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:38.544908  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:38.544944  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:38.544730  112496 retry.go:31] will retry after 823.463269ms: waiting for machine to come up
	I0804 01:27:39.369409  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:39.369811  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:39.369841  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:39.369759  112496 retry.go:31] will retry after 1.463845637s: waiting for machine to come up
	I0804 01:27:40.835396  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:40.835732  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:40.835760  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:40.835674  112496 retry.go:31] will retry after 1.816575461s: waiting for machine to come up
	I0804 01:27:42.654405  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:42.654827  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:42.654857  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:42.654774  112496 retry.go:31] will retry after 1.40027298s: waiting for machine to come up
	I0804 01:27:44.057276  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:44.057718  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:44.057744  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:44.057677  112496 retry.go:31] will retry after 2.379743455s: waiting for machine to come up
	I0804 01:27:46.439422  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:46.439732  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:46.439758  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:46.439684  112496 retry.go:31] will retry after 3.528768878s: waiting for machine to come up
	I0804 01:27:49.969771  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:49.970248  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:49.970276  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:49.970195  112496 retry.go:31] will retry after 3.073877797s: waiting for machine to come up
	I0804 01:27:53.047398  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:53.047739  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:53.047761  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:53.047682  112496 retry.go:31] will retry after 4.825115092s: waiting for machine to come up
	I0804 01:27:57.876864  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.877277  112472 main.go:141] libmachine: (ha-998889) Found IP for machine: 192.168.39.12
	I0804 01:27:57.877303  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has current primary IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.877309  112472 main.go:141] libmachine: (ha-998889) Reserving static IP address...
	I0804 01:27:57.877836  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find host DHCP lease matching {name: "ha-998889", mac: "52:54:00:3a:37:c1", ip: "192.168.39.12"} in network mk-ha-998889
	I0804 01:27:57.950737  112472 main.go:141] libmachine: (ha-998889) DBG | Getting to WaitForSSH function...
	I0804 01:27:57.950766  112472 main.go:141] libmachine: (ha-998889) Reserved static IP address: 192.168.39.12
	I0804 01:27:57.950779  112472 main.go:141] libmachine: (ha-998889) Waiting for SSH to be available...
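
The block of "will retry after …: waiting for machine to come up" lines above is a backoff loop: the driver polls libvirt for a DHCP lease and sleeps a growing, jittered interval between attempts until the domain reports an IP. A minimal sketch of that pattern in Go (the helper name and delays are illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a little longer (plus jitter) between tries.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 500*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the domain finally reported an IP
	})
	fmt.Println("result:", err)
}
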
	I0804 01:27:57.953549  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.953969  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:57.953997  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.954301  112472 main.go:141] libmachine: (ha-998889) DBG | Using SSH client type: external
	I0804 01:27:57.954324  112472 main.go:141] libmachine: (ha-998889) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa (-rw-------)
	I0804 01:27:57.954367  112472 main.go:141] libmachine: (ha-998889) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:27:57.954386  112472 main.go:141] libmachine: (ha-998889) DBG | About to run SSH command:
	I0804 01:27:57.954402  112472 main.go:141] libmachine: (ha-998889) DBG | exit 0
	I0804 01:27:58.081404  112472 main.go:141] libmachine: (ha-998889) DBG | SSH cmd err, output: <nil>: 
	I0804 01:27:58.081657  112472 main.go:141] libmachine: (ha-998889) KVM machine creation complete!
	I0804 01:27:58.081974  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:58.082535  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:58.082730  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:58.082964  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:27:58.082976  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:27:58.084487  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:27:58.084503  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:27:58.084511  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:27:58.084545  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.086802  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.087131  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.087155  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.087277  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.087400  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.087510  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.087654  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.087831  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.088075  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.088092  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:27:58.196986  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
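
The "About to run SSH command: exit 0" / "SSH cmd err, output: <nil>" pair is a reachability probe: the machine counts as up once a trivial command succeeds over SSH. A rough stand-alone equivalent using golang.org/x/crypto/ssh (address, user, and key path below are placeholders; this is not minikube's own SSH runner):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// sshExitZero dials the host and runs "exit 0"; a nil error means SSH is up.
func sshExitZero(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	if err := sshExitZero("192.168.39.12:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}
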
	I0804 01:27:58.197012  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:27:58.197023  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.199725  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.200144  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.200174  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.200323  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.200526  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.200669  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.200790  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.200958  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.201211  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.201225  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:27:58.310564  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:27:58.310645  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:27:58.310651  112472 main.go:141] libmachine: Provisioning with buildroot...
	I0804 01:27:58.310658  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.310944  112472 buildroot.go:166] provisioning hostname "ha-998889"
	I0804 01:27:58.310976  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.311169  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.313818  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.314187  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.314208  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.314413  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.314644  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.314830  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.314980  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.315179  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.315386  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.315401  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889 && echo "ha-998889" | sudo tee /etc/hostname
	I0804 01:27:58.440622  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:27:58.440651  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.443388  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.443772  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.443803  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.444011  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.444222  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.444377  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.444554  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.444740  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.444917  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.444933  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:27:58.562313  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:27:58.562345  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:27:58.562385  112472 buildroot.go:174] setting up certificates
	I0804 01:27:58.562394  112472 provision.go:84] configureAuth start
	I0804 01:27:58.562403  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.562700  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:58.565414  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.565784  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.565827  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.566055  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.568162  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.568441  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.568485  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.568560  112472 provision.go:143] copyHostCerts
	I0804 01:27:58.568601  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:27:58.568635  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:27:58.568643  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:27:58.568706  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:27:58.568791  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:27:58.568811  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:27:58.568815  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:27:58.568839  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:27:58.568874  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:27:58.568888  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:27:58.568891  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:27:58.568916  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:27:58.568957  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889 san=[127.0.0.1 192.168.39.12 ha-998889 localhost minikube]
	I0804 01:27:58.649203  112472 provision.go:177] copyRemoteCerts
	I0804 01:27:58.649275  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:27:58.649302  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.652682  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.653144  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.653168  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.653369  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.653554  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.653734  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.653902  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:58.739651  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:27:58.739722  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:27:58.762637  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:27:58.762710  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0804 01:27:58.785185  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:27:58.785278  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:27:58.807674  112472 provision.go:87] duration metric: took 245.265863ms to configureAuth
	I0804 01:27:58.807705  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:27:58.807885  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:27:58.807967  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.810489  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.810816  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.810846  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.811001  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.811293  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.811472  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.811633  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.811813  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.812018  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.812036  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:27:59.081281  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:27:59.081315  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:27:59.081326  112472 main.go:141] libmachine: (ha-998889) Calling .GetURL
	I0804 01:27:59.082745  112472 main.go:141] libmachine: (ha-998889) DBG | Using libvirt version 6000000
	I0804 01:27:59.084971  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.085294  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.085320  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.085480  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:27:59.085514  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:27:59.085527  112472 client.go:171] duration metric: took 24.959888572s to LocalClient.Create
	I0804 01:27:59.085561  112472 start.go:167] duration metric: took 24.95996898s to libmachine.API.Create "ha-998889"
	I0804 01:27:59.085574  112472 start.go:293] postStartSetup for "ha-998889" (driver="kvm2")
	I0804 01:27:59.085588  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:27:59.085614  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.085881  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:27:59.085909  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.087964  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.088220  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.088245  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.088406  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.088563  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.088717  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.088917  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.173983  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:27:59.178400  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:27:59.178430  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:27:59.178495  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:27:59.178601  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:27:59.178613  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:27:59.178743  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:27:59.190203  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:27:59.216209  112472 start.go:296] duration metric: took 130.616918ms for postStartSetup
	I0804 01:27:59.216259  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:59.216863  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:59.219616  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.220035  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.220056  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.220309  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:27:59.220511  112472 start.go:128] duration metric: took 25.113151184s to createHost
	I0804 01:27:59.220534  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.222940  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.223136  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.223167  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.223325  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.223491  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.223643  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.223755  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.223940  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:59.224112  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:59.224130  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:27:59.334253  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722734879.314213638
	
	I0804 01:27:59.334277  112472 fix.go:216] guest clock: 1722734879.314213638
	I0804 01:27:59.334284  112472 fix.go:229] Guest: 2024-08-04 01:27:59.314213638 +0000 UTC Remote: 2024-08-04 01:27:59.220523818 +0000 UTC m=+25.222386029 (delta=93.68982ms)
	I0804 01:27:59.334306  112472 fix.go:200] guest clock delta is within tolerance: 93.68982ms
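
The fix.go lines compare the guest's date +%s.%N output against the host clock and accept the machine when the delta stays inside a tolerance. A small sketch of that comparison (the tolerance value is illustrative; the sample timestamps are the ones from the log and reproduce the 93.68982ms delta):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// and returns how far the guest clock is from the given host time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return time.Duration(math.Abs(float64(guest.Sub(host)))), nil
}

func main() {
	host := time.Unix(1722734879, 220523818) // host-side timestamp from the log
	d, err := clockDelta("1722734879.314213638", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, d < 2*time.Second)
}
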
	I0804 01:27:59.334311  112472 start.go:83] releasing machines lock for "ha-998889", held for 25.227022794s
	I0804 01:27:59.334328  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.334582  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:59.337372  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.337817  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.337843  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.338000  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.338680  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.338907  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.339026  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:27:59.339068  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.339186  112472 ssh_runner.go:195] Run: cat /version.json
	I0804 01:27:59.339211  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.341918  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.341939  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342330  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.342357  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342428  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.342459  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.342467  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342662  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.342676  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.342855  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.342870  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.343021  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.343064  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.343140  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.441857  112472 ssh_runner.go:195] Run: systemctl --version
	I0804 01:27:59.447801  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:27:59.608632  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:27:59.615401  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:27:59.615478  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:27:59.631843  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:27:59.631872  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:27:59.631949  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:27:59.647341  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:27:59.661296  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:27:59.661370  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:27:59.675596  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:27:59.689634  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:27:59.803349  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:27:59.942225  112472 docker.go:233] disabling docker service ...
	I0804 01:27:59.942310  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:27:59.957083  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:27:59.970098  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:28:00.108965  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:28:00.230198  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:28:00.244364  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:28:00.262827  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:28:00.262883  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.273379  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:28:00.273443  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.284065  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.294637  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.305280  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:28:00.316420  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.327255  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.344330  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
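
The sed -i commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and open unprivileged low ports via default_sysctls. The same line rewrites can be expressed as multiline regexp substitutions; a sketch (not minikube's code, and the input below is a trimmed example config):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
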
	I0804 01:28:00.355505  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:28:00.366051  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:28:00.366132  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:28:00.379276  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 01:28:00.389069  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:00.507815  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:28:00.642273  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:28:00.642363  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:28:00.647404  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:28:00.647470  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:28:00.651326  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:28:00.691325  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:28:00.691405  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:00.719613  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:00.749170  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:28:00.750657  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:28:00.753475  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:00.753835  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:00.753865  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:00.754065  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:28:00.758441  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:28:00.771564  112472 kubeadm.go:883] updating cluster {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 01:28:00.771673  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:28:00.771772  112472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:28:00.803244  112472 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 01:28:00.803317  112472 ssh_runner.go:195] Run: which lz4
	I0804 01:28:00.807363  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0804 01:28:00.807453  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0804 01:28:00.811445  112472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 01:28:00.811471  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 01:28:02.217727  112472 crio.go:462] duration metric: took 1.410295481s to copy over tarball
	I0804 01:28:02.217811  112472 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 01:28:04.389307  112472 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171464178s)
	I0804 01:28:04.389337  112472 crio.go:469] duration metric: took 2.171577201s to extract the tarball
	I0804 01:28:04.389345  112472 ssh_runner.go:146] rm: /preloaded.tar.lz4
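
The preload step copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 into the guest and unpacks it with tar -I lz4 -C /var. For illustration, the same decompression can be done in Go by wrapping an lz4 reader in archive/tar; the github.com/pierrec/lz4/v4 dependency is my choice here, not necessarily what minikube links against, and xattrs are ignored in this sketch:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"

	"github.com/pierrec/lz4/v4"
)

// extractTarLz4 unpacks src (a .tar.lz4 archive) under dst.
func extractTarLz4(src, dst string) error {
	f, err := os.Open(src)
	if err != nil {
		return err
	}
	defer f.Close()

	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		target := filepath.Join(dst, hdr.Name)
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
				return err
			}
		case tar.TypeReg:
			out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(out, tr); err != nil {
				out.Close()
				return err
			}
			out.Close()
		}
	}
}

func main() {
	if err := extractTarLz4("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("preload extracted")
}
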
	I0804 01:28:04.429170  112472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:28:04.482945  112472 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:28:04.482971  112472 cache_images.go:84] Images are preloaded, skipping loading
	I0804 01:28:04.482979  112472 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.30.3 crio true true} ...
	I0804 01:28:04.483107  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:28:04.483200  112472 ssh_runner.go:195] Run: crio config
	I0804 01:28:04.532700  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:28:04.532721  112472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 01:28:04.532733  112472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 01:28:04.532756  112472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-998889 NodeName:ha-998889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 01:28:04.532953  112472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-998889"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
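
The kubeadm config printed above is rendered from the option set logged at kubeadm.go:181 (advertise address, node name, pod and service CIDRs, Kubernetes version, and so on). Rendering such options into YAML is commonly done with text/template; a toy version with made-up field names and a shortened template, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.39.12",
		BindPort:          8443,
		NodeName:          "ha-998889",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.3",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
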
	
	I0804 01:28:04.532995  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:28:04.533045  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:28:04.552308  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:28:04.552441  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0804 01:28:04.552507  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:04.563501  112472 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 01:28:04.563592  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0804 01:28:04.573610  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0804 01:28:04.590467  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:28:04.607300  112472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0804 01:28:04.624655  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0804 01:28:04.641481  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:28:04.645541  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
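
The grep/echo one-liners at 01:28:00.758441 and 01:28:04.645541 keep /etc/hosts idempotent: any stale line for host.minikube.internal or control-plane.minikube.internal is filtered out before the fresh entry is appended. The same update could be written in Go roughly as follows (the path below points at a scratch file rather than /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for hostname and appends "ip<TAB>hostname".
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+hostname) {
			continue // skip blanks and stale entries for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry updated")
}
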
	I0804 01:28:04.658825  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:04.796838  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:28:04.815145  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.12
	I0804 01:28:04.815182  112472 certs.go:194] generating shared ca certs ...
	I0804 01:28:04.815204  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.815403  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:28:04.815446  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:28:04.815456  112472 certs.go:256] generating profile certs ...
	I0804 01:28:04.815511  112472 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:28:04.815530  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt with IP's: []
	I0804 01:28:04.940009  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt ...
	I0804 01:28:04.940038  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt: {Name:mk79fa1e4ae1118cf8f8c0c19ef697182e8e9377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.940226  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key ...
	I0804 01:28:04.940240  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key: {Name:mkf7d9a24b1ec2627891807d54c289d2bfd23b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.940316  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc
	I0804 01:28:04.940331  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.254]
	I0804 01:28:05.009427  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc ...
	I0804 01:28:05.009456  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc: {Name:mk86e869e2e67e118d26f58ab0277fe9fca1ae8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.009611  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc ...
	I0804 01:28:05.009626  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc: {Name:mkc1460bc2d558f3afc3fb170f119d6e0e4da2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.009695  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:28:05.009786  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:28:05.009845  112472 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:28:05.009861  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt with IP's: []
	I0804 01:28:05.178241  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt ...
	I0804 01:28:05.178275  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt: {Name:mk30715d33d423e2f3b5a89adcfd91e99c30f659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.178439  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key ...
	I0804 01:28:05.178449  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key: {Name:mkc8177c06a3f681ba706656a57bcbc40c783550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
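
certs.go and crypto.go above generate the profile's client, apiserver, and aggregator certificates, each signed by the shared minikube CA, with the apiserver cert carrying the listed IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.12, 192.168.39.254). The crypto/x509 flow for a CA-signed serving cert of that shape looks roughly like this self-contained sketch (it also creates a throwaway CA; names and lifetimes are illustrative, not minikube's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and CA-signed certificate with the apiserver's IP SANs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.12"), net.ParseIP("192.168.39.254"),
		},
		DNSNames: []string{"ha-998889", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("generated CA-signed apiserver cert")
}
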
	I0804 01:28:05.178517  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:28:05.178534  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:28:05.178544  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:28:05.178558  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:28:05.178573  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:28:05.178586  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:28:05.178599  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:28:05.178608  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:28:05.178656  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:28:05.178693  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:28:05.178702  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:28:05.178725  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:28:05.178749  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:28:05.178769  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:28:05.178807  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:05.178839  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.178852  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.178864  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.179448  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:28:05.205893  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:28:05.230021  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:28:05.255588  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:28:05.280581  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 01:28:05.305073  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 01:28:05.328855  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:28:05.353197  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:28:05.378515  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:28:05.402783  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:28:05.427163  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:28:05.452026  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 01:28:05.469012  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:28:05.475114  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:28:05.485908  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.490320  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.490378  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.496393  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:28:05.507321  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:28:05.517854  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.522274  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.522312  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.527830  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:28:05.538239  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:28:05.548946  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.553710  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.553782  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.559731  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
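Note: the three openssl/ln sequences above make each extra CA visible to OpenSSL-based tools by linking it into /etc/ssl/certs under its subject-hash name (<hash>.0). A minimal Go sketch of that linking step, shelling out to "openssl x509 -hash" just as the log does (the paths are illustrative; this is not minikube's certs.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, matching the "ln -fs" calls above.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
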
	I0804 01:28:05.570905  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:28:05.575037  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:28:05.575098  112472 kubeadm.go:392] StartCluster: {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:28:05.575214  112472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 01:28:05.575271  112472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 01:28:05.633443  112472 cri.go:89] found id: ""
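Note: the crictl listing above returns no IDs because the node is freshly provisioned and no kube-system containers exist yet. A rough Go sketch of the same query (a hypothetical helper, not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the crictl call in the log: it returns the IDs
// of all containers (running or not) labelled with the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// Fields drops the trailing newline; an empty slice means a fresh node, as seen above.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
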
	I0804 01:28:05.633513  112472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 01:28:05.651461  112472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 01:28:05.670980  112472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 01:28:05.683207  112472 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 01:28:05.683231  112472 kubeadm.go:157] found existing configuration files:
	
	I0804 01:28:05.683289  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 01:28:05.693330  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 01:28:05.693409  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 01:28:05.703503  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 01:28:05.713494  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 01:28:05.713594  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 01:28:05.723579  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 01:28:05.733641  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 01:28:05.733697  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 01:28:05.743835  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 01:28:05.753948  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 01:28:05.754007  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 01:28:05.764492  112472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 01:28:05.875281  112472 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 01:28:05.875374  112472 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 01:28:06.001567  112472 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 01:28:06.001761  112472 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 01:28:06.001898  112472 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 01:28:06.218175  112472 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 01:28:06.397461  112472 out.go:204]   - Generating certificates and keys ...
	I0804 01:28:06.397596  112472 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 01:28:06.397670  112472 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 01:28:06.397772  112472 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 01:28:06.441750  112472 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 01:28:06.891891  112472 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 01:28:06.999877  112472 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 01:28:07.158478  112472 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 01:28:07.158751  112472 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-998889 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0804 01:28:07.336591  112472 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 01:28:07.336808  112472 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-998889 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0804 01:28:07.503189  112472 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 01:28:07.724675  112472 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 01:28:08.127674  112472 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 01:28:08.127969  112472 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 01:28:08.391458  112472 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 01:28:08.511434  112472 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 01:28:08.701182  112472 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 01:28:08.804919  112472 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 01:28:08.956483  112472 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 01:28:08.957068  112472 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 01:28:08.959575  112472 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 01:28:08.961875  112472 out.go:204]   - Booting up control plane ...
	I0804 01:28:08.961985  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 01:28:08.962077  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 01:28:08.962173  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 01:28:08.980748  112472 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 01:28:08.983505  112472 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 01:28:08.983585  112472 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 01:28:09.112364  112472 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 01:28:09.112471  112472 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 01:28:09.613884  112472 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.094392ms
	I0804 01:28:09.613972  112472 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 01:28:15.601597  112472 kubeadm.go:310] [api-check] The API server is healthy after 5.990804115s
	I0804 01:28:15.617412  112472 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 01:28:15.636486  112472 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 01:28:15.668429  112472 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 01:28:15.668645  112472 kubeadm.go:310] [mark-control-plane] Marking the node ha-998889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 01:28:15.685753  112472 kubeadm.go:310] [bootstrap-token] Using token: 6isgoe.8x9m8twbydje2d0l
	I0804 01:28:15.687214  112472 out.go:204]   - Configuring RBAC rules ...
	I0804 01:28:15.687354  112472 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 01:28:15.700905  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 01:28:15.717628  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 01:28:15.721175  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 01:28:15.724694  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 01:28:15.728491  112472 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 01:28:16.008898  112472 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 01:28:16.446887  112472 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 01:28:17.009123  112472 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 01:28:17.009149  112472 kubeadm.go:310] 
	I0804 01:28:17.009213  112472 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 01:28:17.009221  112472 kubeadm.go:310] 
	I0804 01:28:17.009311  112472 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 01:28:17.009319  112472 kubeadm.go:310] 
	I0804 01:28:17.009344  112472 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 01:28:17.009469  112472 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 01:28:17.009557  112472 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 01:28:17.009568  112472 kubeadm.go:310] 
	I0804 01:28:17.009646  112472 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 01:28:17.009656  112472 kubeadm.go:310] 
	I0804 01:28:17.009745  112472 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 01:28:17.009761  112472 kubeadm.go:310] 
	I0804 01:28:17.009828  112472 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 01:28:17.009896  112472 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 01:28:17.009956  112472 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 01:28:17.009962  112472 kubeadm.go:310] 
	I0804 01:28:17.010035  112472 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 01:28:17.010098  112472 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 01:28:17.010104  112472 kubeadm.go:310] 
	I0804 01:28:17.010174  112472 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6isgoe.8x9m8twbydje2d0l \
	I0804 01:28:17.010280  112472 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e \
	I0804 01:28:17.010300  112472 kubeadm.go:310] 	--control-plane 
	I0804 01:28:17.010304  112472 kubeadm.go:310] 
	I0804 01:28:17.010371  112472 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 01:28:17.010377  112472 kubeadm.go:310] 
	I0804 01:28:17.010451  112472 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6isgoe.8x9m8twbydje2d0l \
	I0804 01:28:17.010535  112472 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e 
	I0804 01:28:17.011131  112472 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
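Note: the --discovery-token-ca-cert-hash value printed by kubeadm above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. A small Go sketch for recomputing it from ca.crt when verifying a join command (the certificate path is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the value kubeadm expects after "sha256:" in
// --discovery-token-ca-cert-hash: the SHA-256 of the CA's SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}
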
	I0804 01:28:17.011158  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:28:17.011167  112472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 01:28:17.014169  112472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0804 01:28:17.015659  112472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0804 01:28:17.021041  112472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 01:28:17.021064  112472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0804 01:28:17.043824  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0804 01:28:17.417299  112472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 01:28:17.417390  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:17.417391  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889 minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=true
	I0804 01:28:17.455986  112472 ops.go:34] apiserver oom_adj: -16
	I0804 01:28:17.615580  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:18.115739  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:18.616056  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:19.115979  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:19.616474  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:20.116435  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:20.615936  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:21.115963  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:21.616474  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:22.115724  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:22.616173  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:23.116602  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:23.616301  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:24.116484  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:24.616677  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:25.116304  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:25.616250  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:26.116434  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:26.615730  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:27.116005  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:27.616356  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:28.116650  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:28.616666  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.115952  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.616060  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.733039  112472 kubeadm.go:1113] duration metric: took 12.31573191s to wait for elevateKubeSystemPrivileges
	I0804 01:28:29.733085  112472 kubeadm.go:394] duration metric: took 24.157991663s to StartCluster
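Note: the burst of "kubectl get sa default" invocations above is a simple poll, retried roughly every 500ms until the default service account exists and kube-system privileges can be granted. A hedged Go sketch of such a retry loop (the timeout is an assumption; the binary and kubeconfig paths are taken from the log):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or ctx expires,
// mirroring the 500ms polling visible in the log above.
func waitForDefaultSA(ctx context.Context, kubectl, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("default service account never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	err := waitForDefaultSA(ctx, "/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig")
	fmt.Println(err)
}
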
	I0804 01:28:29.733110  112472 settings.go:142] acquiring lock: {Name:mkf532aceb8d8524495256eb01b2b67c117281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:29.733210  112472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:28:29.734249  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/kubeconfig: {Name:mk9db0d5521301bbe44f571d0153ba4b675d0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:29.734513  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 01:28:29.734516  112472 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:28:29.734544  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:28:29.734566  112472 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 01:28:29.734646  112472 addons.go:69] Setting storage-provisioner=true in profile "ha-998889"
	I0804 01:28:29.734659  112472 addons.go:69] Setting default-storageclass=true in profile "ha-998889"
	I0804 01:28:29.734687  112472 addons.go:234] Setting addon storage-provisioner=true in "ha-998889"
	I0804 01:28:29.734706  112472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-998889"
	I0804 01:28:29.734723  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:29.734739  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:29.735117  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.735149  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.735168  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.735182  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.750614  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0804 01:28:29.751009  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0804 01:28:29.751245  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.751525  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.751743  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.751763  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.752055  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.752071  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.752110  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.752386  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.752568  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.752625  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.752666  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.754809  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:28:29.755181  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 01:28:29.755705  112472 cert_rotation.go:137] Starting client certificate rotation controller
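Note: the client config dumped by kapi.go above is an ordinary client-go rest.Config built from the test kubeconfig, pointing at the HA virtual IP and using the profile's client certificate and the minikube CA. A minimal client-go sketch of constructing an equivalent client (illustrative only, not minikube's own code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig written by minikube; the client
	// certificate, key and CA paths come from the kubeconfig itself.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/19364-90243/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Example call against the HA VIP endpoint, analogous to the storageclass
	// round-trips later in the log.
	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("storage classes:", len(scs.Items))
}
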
	I0804 01:28:29.755999  112472 addons.go:234] Setting addon default-storageclass=true in "ha-998889"
	I0804 01:28:29.756055  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:29.756485  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.756532  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.768633  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0804 01:28:29.769157  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.769723  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.769746  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.770106  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.770309  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.771991  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:29.773402  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0804 01:28:29.773795  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.773872  112472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 01:28:29.774250  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.774273  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.774612  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.775085  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.775128  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.775208  112472 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 01:28:29.775238  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 01:28:29.775259  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:29.778293  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.778671  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:29.778732  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.778826  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:29.779026  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:29.779194  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:29.779349  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:28:29.790238  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
	I0804 01:28:29.790725  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.791193  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.791217  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.791530  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.791721  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.793348  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:29.793620  112472 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 01:28:29.793637  112472 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 01:28:29.793657  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:29.796210  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.796581  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:29.796602  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.796779  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:29.796947  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:29.797114  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:29.797257  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:28:29.865785  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 01:28:29.958212  112472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 01:28:29.968317  112472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 01:28:30.252653  112472 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
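Note: the sed pipeline at 01:28:29.865785 inserts a hosts block ahead of the Corefile's "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the gateway address, then replaces the CoreDNS ConfigMap. A hedged Go sketch of the same string transformation (the Corefile constant below is an abbreviated example, not the full ConfigMap):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block ahead of the "forward . /etc/resolv.conf"
// line, which is what the sed expression in the log accomplishes.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
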
	I0804 01:28:30.270098  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.270125  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.270433  112472 main.go:141] libmachine: (ha-998889) DBG | Closing plugin on server side
	I0804 01:28:30.270503  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.270524  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.270537  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.270548  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.270810  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.270824  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.270982  112472 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0804 01:28:30.270990  112472 round_trippers.go:469] Request Headers:
	I0804 01:28:30.271001  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:28:30.271007  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:28:30.278575  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:28:30.279171  112472 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0804 01:28:30.279186  112472 round_trippers.go:469] Request Headers:
	I0804 01:28:30.279193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:28:30.279197  112472 round_trippers.go:473]     Content-Type: application/json
	I0804 01:28:30.279200  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:28:30.281943  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:28:30.282103  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.282113  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.282345  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.282364  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.282366  112472 main.go:141] libmachine: (ha-998889) DBG | Closing plugin on server side
	I0804 01:28:30.475645  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.475673  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.476028  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.476048  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.476057  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.476064  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.476319  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.476330  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.478005  112472 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0804 01:28:30.479238  112472 addons.go:510] duration metric: took 744.675262ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0804 01:28:30.479272  112472 start.go:246] waiting for cluster config update ...
	I0804 01:28:30.479285  112472 start.go:255] writing updated cluster config ...
	I0804 01:28:30.480863  112472 out.go:177] 
	I0804 01:28:30.482606  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:30.482684  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:30.484303  112472 out.go:177] * Starting "ha-998889-m02" control-plane node in "ha-998889" cluster
	I0804 01:28:30.485460  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:28:30.485492  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:28:30.485599  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:28:30.485624  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:28:30.485730  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:30.486496  112472 start.go:360] acquireMachinesLock for ha-998889-m02: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:28:30.486550  112472 start.go:364] duration metric: took 31.213µs to acquireMachinesLock for "ha-998889-m02"
	I0804 01:28:30.486565  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:28:30.486638  112472 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0804 01:28:30.488066  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:28:30.488167  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:30.488208  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:30.503256  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0804 01:28:30.503667  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:30.504160  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:30.504194  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:30.504538  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:30.504781  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:30.505051  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:30.505230  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:28:30.505265  112472 client.go:168] LocalClient.Create starting
	I0804 01:28:30.505305  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:28:30.505394  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:28:30.505426  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:28:30.505500  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:28:30.505528  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:28:30.505544  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:28:30.505566  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:28:30.505577  112472 main.go:141] libmachine: (ha-998889-m02) Calling .PreCreateCheck
	I0804 01:28:30.505766  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:30.506203  112472 main.go:141] libmachine: Creating machine...
	I0804 01:28:30.506221  112472 main.go:141] libmachine: (ha-998889-m02) Calling .Create
	I0804 01:28:30.506352  112472 main.go:141] libmachine: (ha-998889-m02) Creating KVM machine...
	I0804 01:28:30.507660  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found existing default KVM network
	I0804 01:28:30.507826  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found existing private KVM network mk-ha-998889
	I0804 01:28:30.507953  112472 main.go:141] libmachine: (ha-998889-m02) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 ...
	I0804 01:28:30.507983  112472 main.go:141] libmachine: (ha-998889-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:28:30.508117  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.507974  112881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:28:30.508168  112472 main.go:141] libmachine: (ha-998889-m02) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:28:30.761338  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.761188  112881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa...
	I0804 01:28:30.919696  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.919552  112881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/ha-998889-m02.rawdisk...
	I0804 01:28:30.919735  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Writing magic tar header
	I0804 01:28:30.919751  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Writing SSH key tar header
	I0804 01:28:30.919764  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.919688  112881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 ...
	I0804 01:28:30.919871  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02
	I0804 01:28:30.919904  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 (perms=drwx------)
	I0804 01:28:30.919919  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:28:30.919943  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:28:30.919962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:28:30.919973  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:28:30.919989  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:28:30.920002  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:28:30.920012  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:28:30.920027  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:28:30.920037  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:28:30.920050  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home
	I0804 01:28:30.920060  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Skipping /home - not owner
	I0804 01:28:30.920097  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:28:30.920116  112472 main.go:141] libmachine: (ha-998889-m02) Creating domain...
	I0804 01:28:30.921063  112472 main.go:141] libmachine: (ha-998889-m02) define libvirt domain using xml: 
	I0804 01:28:30.921081  112472 main.go:141] libmachine: (ha-998889-m02) <domain type='kvm'>
	I0804 01:28:30.921091  112472 main.go:141] libmachine: (ha-998889-m02)   <name>ha-998889-m02</name>
	I0804 01:28:30.921099  112472 main.go:141] libmachine: (ha-998889-m02)   <memory unit='MiB'>2200</memory>
	I0804 01:28:30.921107  112472 main.go:141] libmachine: (ha-998889-m02)   <vcpu>2</vcpu>
	I0804 01:28:30.921113  112472 main.go:141] libmachine: (ha-998889-m02)   <features>
	I0804 01:28:30.921123  112472 main.go:141] libmachine: (ha-998889-m02)     <acpi/>
	I0804 01:28:30.921127  112472 main.go:141] libmachine: (ha-998889-m02)     <apic/>
	I0804 01:28:30.921135  112472 main.go:141] libmachine: (ha-998889-m02)     <pae/>
	I0804 01:28:30.921140  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921148  112472 main.go:141] libmachine: (ha-998889-m02)   </features>
	I0804 01:28:30.921153  112472 main.go:141] libmachine: (ha-998889-m02)   <cpu mode='host-passthrough'>
	I0804 01:28:30.921159  112472 main.go:141] libmachine: (ha-998889-m02)   
	I0804 01:28:30.921164  112472 main.go:141] libmachine: (ha-998889-m02)   </cpu>
	I0804 01:28:30.921171  112472 main.go:141] libmachine: (ha-998889-m02)   <os>
	I0804 01:28:30.921176  112472 main.go:141] libmachine: (ha-998889-m02)     <type>hvm</type>
	I0804 01:28:30.921183  112472 main.go:141] libmachine: (ha-998889-m02)     <boot dev='cdrom'/>
	I0804 01:28:30.921188  112472 main.go:141] libmachine: (ha-998889-m02)     <boot dev='hd'/>
	I0804 01:28:30.921194  112472 main.go:141] libmachine: (ha-998889-m02)     <bootmenu enable='no'/>
	I0804 01:28:30.921198  112472 main.go:141] libmachine: (ha-998889-m02)   </os>
	I0804 01:28:30.921203  112472 main.go:141] libmachine: (ha-998889-m02)   <devices>
	I0804 01:28:30.921210  112472 main.go:141] libmachine: (ha-998889-m02)     <disk type='file' device='cdrom'>
	I0804 01:28:30.921218  112472 main.go:141] libmachine: (ha-998889-m02)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/boot2docker.iso'/>
	I0804 01:28:30.921225  112472 main.go:141] libmachine: (ha-998889-m02)       <target dev='hdc' bus='scsi'/>
	I0804 01:28:30.921230  112472 main.go:141] libmachine: (ha-998889-m02)       <readonly/>
	I0804 01:28:30.921236  112472 main.go:141] libmachine: (ha-998889-m02)     </disk>
	I0804 01:28:30.921242  112472 main.go:141] libmachine: (ha-998889-m02)     <disk type='file' device='disk'>
	I0804 01:28:30.921250  112472 main.go:141] libmachine: (ha-998889-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:28:30.921262  112472 main.go:141] libmachine: (ha-998889-m02)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/ha-998889-m02.rawdisk'/>
	I0804 01:28:30.921269  112472 main.go:141] libmachine: (ha-998889-m02)       <target dev='hda' bus='virtio'/>
	I0804 01:28:30.921274  112472 main.go:141] libmachine: (ha-998889-m02)     </disk>
	I0804 01:28:30.921281  112472 main.go:141] libmachine: (ha-998889-m02)     <interface type='network'>
	I0804 01:28:30.921287  112472 main.go:141] libmachine: (ha-998889-m02)       <source network='mk-ha-998889'/>
	I0804 01:28:30.921294  112472 main.go:141] libmachine: (ha-998889-m02)       <model type='virtio'/>
	I0804 01:28:30.921299  112472 main.go:141] libmachine: (ha-998889-m02)     </interface>
	I0804 01:28:30.921306  112472 main.go:141] libmachine: (ha-998889-m02)     <interface type='network'>
	I0804 01:28:30.921325  112472 main.go:141] libmachine: (ha-998889-m02)       <source network='default'/>
	I0804 01:28:30.921332  112472 main.go:141] libmachine: (ha-998889-m02)       <model type='virtio'/>
	I0804 01:28:30.921337  112472 main.go:141] libmachine: (ha-998889-m02)     </interface>
	I0804 01:28:30.921343  112472 main.go:141] libmachine: (ha-998889-m02)     <serial type='pty'>
	I0804 01:28:30.921349  112472 main.go:141] libmachine: (ha-998889-m02)       <target port='0'/>
	I0804 01:28:30.921368  112472 main.go:141] libmachine: (ha-998889-m02)     </serial>
	I0804 01:28:30.921380  112472 main.go:141] libmachine: (ha-998889-m02)     <console type='pty'>
	I0804 01:28:30.921391  112472 main.go:141] libmachine: (ha-998889-m02)       <target type='serial' port='0'/>
	I0804 01:28:30.921412  112472 main.go:141] libmachine: (ha-998889-m02)     </console>
	I0804 01:28:30.921430  112472 main.go:141] libmachine: (ha-998889-m02)     <rng model='virtio'>
	I0804 01:28:30.921440  112472 main.go:141] libmachine: (ha-998889-m02)       <backend model='random'>/dev/random</backend>
	I0804 01:28:30.921445  112472 main.go:141] libmachine: (ha-998889-m02)     </rng>
	I0804 01:28:30.921450  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921456  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921461  112472 main.go:141] libmachine: (ha-998889-m02)   </devices>
	I0804 01:28:30.921466  112472 main.go:141] libmachine: (ha-998889-m02) </domain>
	I0804 01:28:30.921495  112472 main.go:141] libmachine: (ha-998889-m02) 
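The XML logged above is handed straight to libvirt by the kvm2 driver. A minimal manual equivalent, as a sketch only and assuming the mk-ha-998889 network and the referenced disk/ISO paths already exist, would be:

	# save the <domain>...</domain> XML shown above as ha-998889-m02.xml, then:
	virsh define ha-998889-m02.xml      # register the domain with libvirt
	virsh start ha-998889-m02           # first boot comes from the boot2docker ISO (boot dev='cdrom')
	virsh dumpxml ha-998889-m02         # inspect the MAC addresses libvirt generated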
	I0804 01:28:30.929778  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:15:1a:27 in network default
	I0804 01:28:30.930433  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring networks are active...
	I0804 01:28:30.930454  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:30.931330  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring network default is active
	I0804 01:28:30.931670  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring network mk-ha-998889 is active
	I0804 01:28:30.932110  112472 main.go:141] libmachine: (ha-998889-m02) Getting domain xml...
	I0804 01:28:30.933052  112472 main.go:141] libmachine: (ha-998889-m02) Creating domain...
	I0804 01:28:32.149109  112472 main.go:141] libmachine: (ha-998889-m02) Waiting to get IP...
	I0804 01:28:32.150031  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.150399  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.150455  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.150381  112881 retry.go:31] will retry after 268.179165ms: waiting for machine to come up
	I0804 01:28:32.419905  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.420328  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.420372  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.420289  112881 retry.go:31] will retry after 367.807233ms: waiting for machine to come up
	I0804 01:28:32.790173  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.790611  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.790644  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.790569  112881 retry.go:31] will retry after 425.29844ms: waiting for machine to come up
	I0804 01:28:33.217193  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:33.217673  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:33.217701  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:33.217622  112881 retry.go:31] will retry after 456.348174ms: waiting for machine to come up
	I0804 01:28:33.675237  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:33.675694  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:33.675719  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:33.675643  112881 retry.go:31] will retry after 744.6172ms: waiting for machine to come up
	I0804 01:28:34.421724  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:34.422221  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:34.422255  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:34.422180  112881 retry.go:31] will retry after 953.022328ms: waiting for machine to come up
	I0804 01:28:35.377632  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:35.378080  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:35.378120  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:35.378025  112881 retry.go:31] will retry after 727.937271ms: waiting for machine to come up
	I0804 01:28:36.107712  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:36.108227  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:36.108268  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:36.108150  112881 retry.go:31] will retry after 1.033849143s: waiting for machine to come up
	I0804 01:28:37.143498  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:37.143943  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:37.143962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:37.143922  112881 retry.go:31] will retry after 1.350606885s: waiting for machine to come up
	I0804 01:28:38.495904  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:38.496349  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:38.496367  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:38.496308  112881 retry.go:31] will retry after 1.90273357s: waiting for machine to come up
	I0804 01:28:40.401125  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:40.401637  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:40.401670  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:40.401581  112881 retry.go:31] will retry after 2.647896385s: waiting for machine to come up
	I0804 01:28:43.052964  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:43.053480  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:43.053511  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:43.053422  112881 retry.go:31] will retry after 2.25124518s: waiting for machine to come up
	I0804 01:28:45.307295  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:45.307695  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:45.307730  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:45.307650  112881 retry.go:31] will retry after 4.396427726s: waiting for machine to come up
	I0804 01:28:49.706546  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:49.706941  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:49.706985  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:49.706909  112881 retry.go:31] will retry after 4.887319809s: waiting for machine to come up
	I0804 01:28:54.595364  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.595847  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has current primary IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.595873  112472 main.go:141] libmachine: (ha-998889-m02) Found IP for machine: 192.168.39.200
	I0804 01:28:54.595892  112472 main.go:141] libmachine: (ha-998889-m02) Reserving static IP address...
	I0804 01:28:54.596265  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find host DHCP lease matching {name: "ha-998889-m02", mac: "52:54:00:bf:26:17", ip: "192.168.39.200"} in network mk-ha-998889
	I0804 01:28:54.669966  112472 main.go:141] libmachine: (ha-998889-m02) Reserved static IP address: 192.168.39.200
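The retry loop above is simply polling libvirt's DHCP leases until the new MAC address shows up. A hand-rolled sketch of the same check, using the network and MAC taken from the log, is:

	# repeat until a lease for the VM's MAC appears in the libvirt network
	virsh net-dhcp-leases mk-ha-998889 | grep 52:54:00:bf:26:17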
	I0804 01:28:54.670001  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Getting to WaitForSSH function...
	I0804 01:28:54.670011  112472 main.go:141] libmachine: (ha-998889-m02) Waiting for SSH to be available...
	I0804 01:28:54.672968  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.673435  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.673465  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.673571  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using SSH client type: external
	I0804 01:28:54.673596  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa (-rw-------)
	I0804 01:28:54.673631  112472 main.go:141] libmachine: (ha-998889-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:28:54.673644  112472 main.go:141] libmachine: (ha-998889-m02) DBG | About to run SSH command:
	I0804 01:28:54.673661  112472 main.go:141] libmachine: (ha-998889-m02) DBG | exit 0
	I0804 01:28:54.801760  112472 main.go:141] libmachine: (ha-998889-m02) DBG | SSH cmd err, output: <nil>: 
	I0804 01:28:54.802063  112472 main.go:141] libmachine: (ha-998889-m02) KVM machine creation complete!
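The WaitForSSH step keeps retrying the external ssh invocation whose options are logged above until `exit 0` succeeds. Flattened into a single command line (a sketch; paths, port, and address copied from the log), it is roughly:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa \
	    -p 22 docker@192.168.39.200 "exit 0"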
	I0804 01:28:54.802368  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:54.802882  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:54.803073  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:54.803244  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:28:54.803257  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:28:54.804651  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:28:54.804672  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:28:54.804678  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:28:54.804684  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:54.807078  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.807437  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.807464  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.807584  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:54.807763  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.807893  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.808025  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:54.808217  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:54.808418  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:54.808429  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:28:54.916476  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:28:54.916504  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:28:54.916512  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:54.919614  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.920107  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.920132  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.920376  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:54.920594  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.920750  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.920911  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:54.921127  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:54.921409  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:54.921427  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:28:55.026395  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:28:55.026504  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:28:55.026517  112472 main.go:141] libmachine: Provisioning with buildroot...
	I0804 01:28:55.026530  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.026852  112472 buildroot.go:166] provisioning hostname "ha-998889-m02"
	I0804 01:28:55.026884  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.027051  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.030120  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.030560  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.030590  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.030755  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.030985  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.031160  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.031338  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.031502  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.031702  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.031718  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889-m02 && echo "ha-998889-m02" | sudo tee /etc/hostname
	I0804 01:28:55.153923  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889-m02
	
	I0804 01:28:55.153955  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.156619  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.156986  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.157029  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.157243  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.157477  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.157651  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.157767  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.157911  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.158137  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.158154  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:28:55.277469  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:28:55.277508  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:28:55.277527  112472 buildroot.go:174] setting up certificates
	I0804 01:28:55.277539  112472 provision.go:84] configureAuth start
	I0804 01:28:55.277553  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.277902  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:55.280624  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.281054  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.281079  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.281327  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.283605  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.283962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.283992  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.284117  112472 provision.go:143] copyHostCerts
	I0804 01:28:55.284151  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:28:55.284207  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:28:55.284217  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:28:55.284282  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:28:55.284369  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:28:55.284386  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:28:55.284393  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:28:55.284416  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:28:55.284506  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:28:55.284527  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:28:55.284531  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:28:55.284556  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:28:55.284616  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889-m02 san=[127.0.0.1 192.168.39.200 ha-998889-m02 localhost minikube]
	I0804 01:28:55.370416  112472 provision.go:177] copyRemoteCerts
	I0804 01:28:55.370480  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:28:55.370506  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.373305  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.373706  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.373740  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.373908  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.374089  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.374214  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.374334  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:55.455658  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:28:55.455756  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:28:55.481778  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:28:55.481879  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 01:28:55.505846  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:28:55.505919  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 01:28:55.530149  112472 provision.go:87] duration metric: took 252.586948ms to configureAuth
	I0804 01:28:55.530186  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:28:55.530406  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:55.530556  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.533389  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.533826  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.533857  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.534022  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.534248  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.534388  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.534569  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.534765  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.534982  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.535004  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:28:55.805013  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:28:55.805045  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:28:55.805053  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetURL
	I0804 01:28:55.806487  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using libvirt version 6000000
	I0804 01:28:55.808907  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.809254  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.809275  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.809474  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:28:55.809493  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:28:55.809502  112472 client.go:171] duration metric: took 25.304226093s to LocalClient.Create
	I0804 01:28:55.809533  112472 start.go:167] duration metric: took 25.304304839s to libmachine.API.Create "ha-998889"
	I0804 01:28:55.809545  112472 start.go:293] postStartSetup for "ha-998889-m02" (driver="kvm2")
	I0804 01:28:55.809558  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:28:55.809592  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:55.809860  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:28:55.809886  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.811927  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.812234  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.812260  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.812385  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.812597  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.812759  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.812937  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:55.896262  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:28:55.901088  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:28:55.901113  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:28:55.901189  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:28:55.901292  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:28:55.901307  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:28:55.901437  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:28:55.911162  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:55.935681  112472 start.go:296] duration metric: took 126.119459ms for postStartSetup
	I0804 01:28:55.935742  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:55.936546  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:55.939881  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.940391  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.940422  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.940670  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:55.940907  112472 start.go:128] duration metric: took 25.454257234s to createHost
	I0804 01:28:55.940935  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.943420  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.943758  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.943783  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.943962  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.944144  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.944349  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.944531  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.944700  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.944900  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.944914  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:28:56.050260  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722734936.026904772
	
	I0804 01:28:56.050285  112472 fix.go:216] guest clock: 1722734936.026904772
	I0804 01:28:56.050296  112472 fix.go:229] Guest: 2024-08-04 01:28:56.026904772 +0000 UTC Remote: 2024-08-04 01:28:55.94092076 +0000 UTC m=+81.942782970 (delta=85.984012ms)
	I0804 01:28:56.050317  112472 fix.go:200] guest clock delta is within tolerance: 85.984012ms
	I0804 01:28:56.050324  112472 start.go:83] releasing machines lock for "ha-998889-m02", held for 25.563767731s
	I0804 01:28:56.050350  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.050643  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:56.053141  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.053574  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.053596  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.055845  112472 out.go:177] * Found network options:
	I0804 01:28:56.057415  112472 out.go:177]   - NO_PROXY=192.168.39.12
	W0804 01:28:56.058561  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:28:56.058602  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059197  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059409  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059516  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:28:56.059557  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	W0804 01:28:56.059633  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:28:56.059717  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:28:56.059744  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:56.062277  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062338  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062590  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.062609  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062632  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.062648  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062810  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:56.062982  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:56.063073  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:56.063148  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:56.063210  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:56.063287  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:56.063342  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:56.063396  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:56.304368  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:28:56.311725  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:28:56.311804  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:28:56.328673  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:28:56.328701  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:28:56.328768  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:28:56.346593  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:28:56.362206  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:28:56.362264  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:28:56.376930  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:28:56.391727  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:28:56.519492  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:28:56.680072  112472 docker.go:233] disabling docker service ...
	I0804 01:28:56.680171  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:28:56.695362  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:28:56.709491  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:28:56.829866  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:28:56.947379  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:28:56.961963  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:28:56.980015  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:28:56.980086  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:56.991285  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:28:56.991362  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.003712  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.015998  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.029215  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:28:57.041461  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.052536  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.070434  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
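Assuming the stock layout of /etc/crio/crio.conf.d/02-crio.conf, the sed edits above leave it with roughly the following settings (a reconstruction from the commands, not a dump of the actual file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]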
	I0804 01:28:57.081642  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:28:57.091874  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:28:57.091931  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:28:57.106309  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 01:28:57.116586  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:57.240378  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:28:57.374852  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:28:57.374944  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:28:57.380338  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:28:57.380413  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:28:57.384825  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:28:57.426828  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:28:57.426926  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:57.455982  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:57.485984  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:28:57.487486  112472 out.go:177]   - env NO_PROXY=192.168.39.12
	I0804 01:28:57.488688  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:57.491091  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:57.491401  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:57.491429  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:57.491581  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:28:57.495938  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:28:57.508732  112472 mustload.go:65] Loading cluster: ha-998889
	I0804 01:28:57.508983  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:57.509252  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:57.509302  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:57.524594  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0804 01:28:57.525539  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:57.526011  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:57.526031  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:57.526386  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:57.526592  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:57.528097  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:57.528435  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:57.528491  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:57.544362  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0804 01:28:57.544824  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:57.545302  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:57.545327  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:57.545694  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:57.545959  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:57.546205  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.200
	I0804 01:28:57.546218  112472 certs.go:194] generating shared ca certs ...
	I0804 01:28:57.546233  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.546371  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:28:57.546412  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:28:57.546422  112472 certs.go:256] generating profile certs ...
	I0804 01:28:57.546483  112472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:28:57.546510  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706
	I0804 01:28:57.546524  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.254]
	I0804 01:28:57.952681  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 ...
	I0804 01:28:57.952711  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706: {Name:mk16aa54dedad4e240fa220451742f589cf5420b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.952910  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706 ...
	I0804 01:28:57.952928  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706: {Name:mkb647fef86cc95a64e2aca9905e764b6b7263b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.953036  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:28:57.953171  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:28:57.953302  112472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:28:57.953322  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:28:57.953336  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:28:57.953350  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:28:57.953408  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:28:57.953423  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:28:57.953436  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:28:57.953450  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:28:57.953469  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:28:57.953536  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:28:57.953576  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:28:57.953586  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:28:57.953619  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:28:57.953663  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:28:57.953695  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:28:57.953747  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:57.953791  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:28:57.953815  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:57.953841  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:28:57.953885  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:57.956832  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:57.957242  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:57.957268  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:57.957428  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:57.957638  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:57.957820  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:57.957952  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:28:58.033777  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0804 01:28:58.038767  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 01:28:58.050460  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0804 01:28:58.055757  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 01:28:58.066499  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 01:28:58.071462  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 01:28:58.082123  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0804 01:28:58.086434  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 01:28:58.097739  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0804 01:28:58.103591  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 01:28:58.115120  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0804 01:28:58.120728  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 01:28:58.132224  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:28:58.169844  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:28:58.193921  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:28:58.217944  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:28:58.241393  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 01:28:58.266903  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 01:28:58.291393  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:28:58.315927  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:28:58.340516  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:28:58.366622  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:28:58.392075  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:28:58.416933  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 01:28:58.435506  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 01:28:58.452323  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 01:28:58.469418  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 01:28:58.485933  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 01:28:58.502667  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 01:28:58.519181  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 01:28:58.535798  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:28:58.541695  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:28:58.552808  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.557427  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.557490  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.563334  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:28:58.574153  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:28:58.585290  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.590009  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.590102  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.596387  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:28:58.608361  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:28:58.619806  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.624852  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.624943  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.630829  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:28:58.642877  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:28:58.647833  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:28:58.647884  112472 kubeadm.go:934] updating node {m02 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0804 01:28:58.647985  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:28:58.648017  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:28:58.648059  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:28:58.668536  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:28:58.668613  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0804 01:28:58.668669  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:58.680647  112472 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 01:28:58.680724  112472 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:58.692441  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0804 01:28:58.692470  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:28:58.692523  112472 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0804 01:28:58.692546  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:28:58.692523  112472 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0804 01:28:58.697122  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 01:28:58.697156  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 01:28:59.596488  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:28:59.596576  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:28:59.601893  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 01:28:59.601925  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 01:28:59.886375  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:28:59.902133  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:28:59.902257  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:28:59.906920  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 01:28:59.906962  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0804 01:29:00.313229  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 01:29:00.322995  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 01:29:00.340463  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:29:00.357581  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:29:00.374959  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:29:00.378987  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:29:00.392030  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:29:00.512075  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:29:00.530566  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:29:00.530967  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:29:00.531013  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:29:00.546784  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0804 01:29:00.547284  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:29:00.547808  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:29:00.547838  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:29:00.548162  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:29:00.548413  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:29:00.548612  112472 start.go:317] joinCluster: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:29:00.548708  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 01:29:00.548728  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:29:00.551822  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:29:00.552246  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:29:00.552273  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:29:00.552439  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:29:00.552637  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:29:00.552823  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:29:00.552993  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:29:00.710931  112472 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:00.710981  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tyjh8y.fzi76243575sf4so --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m02 --control-plane --apiserver-advertise-address=192.168.39.200 --apiserver-bind-port=8443"
	I0804 01:29:23.405264  112472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tyjh8y.fzi76243575sf4so --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m02 --control-plane --apiserver-advertise-address=192.168.39.200 --apiserver-bind-port=8443": (22.694253426s)
	I0804 01:29:23.405319  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 01:29:23.849202  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889-m02 minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=false
	I0804 01:29:23.995444  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-998889-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0804 01:29:24.121433  112472 start.go:319] duration metric: took 23.57281924s to joinCluster
	I0804 01:29:24.121519  112472 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:24.121802  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:29:24.123008  112472 out.go:177] * Verifying Kubernetes components...
	I0804 01:29:24.124632  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:29:24.388677  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:29:24.426816  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:29:24.427177  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 01:29:24.427264  112472 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0804 01:29:24.427539  112472 node_ready.go:35] waiting up to 6m0s for node "ha-998889-m02" to be "Ready" ...
	I0804 01:29:24.427664  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:24.427674  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:24.427683  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:24.427691  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:24.437163  112472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0804 01:29:24.928231  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:24.928255  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:24.928267  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:24.928272  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:24.934840  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:29:25.427906  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:25.427932  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:25.427942  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:25.427947  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:25.431712  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:25.927883  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:25.927912  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:25.927923  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:25.927928  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:25.931522  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:26.428381  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:26.428403  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:26.428411  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:26.428415  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:26.431959  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:26.432693  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:26.928178  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:26.928209  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:26.928221  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:26.928228  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:26.931324  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:27.428382  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:27.428403  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:27.428412  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:27.428415  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:27.432110  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:27.927922  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:27.927948  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:27.927960  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:27.927966  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:27.932659  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:28.428674  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:28.428704  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:28.428716  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:28.428724  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:28.431795  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:28.927849  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:28.927877  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:28.927889  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:28.927897  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:28.931369  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:28.932010  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:29.427829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:29.427852  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:29.427860  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:29.427864  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:29.431026  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:29.928621  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:29.928649  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:29.928659  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:29.928663  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:29.931784  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.428495  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:30.428517  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:30.428525  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:30.428530  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:30.432464  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.928589  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:30.928613  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:30.928624  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:30.928631  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:30.932205  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.932830  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:31.428234  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:31.428258  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:31.428267  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:31.428272  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:31.432086  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:31.928358  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:31.928381  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:31.928389  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:31.928393  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:31.932119  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:32.428407  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:32.428431  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:32.428438  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:32.428444  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:32.431769  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:32.928581  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:32.928603  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:32.928613  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:32.928617  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:32.931484  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:33.428480  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:33.428510  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:33.428519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:33.428524  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:33.432714  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:33.433679  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:33.927920  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:33.927943  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:33.927951  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:33.927956  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:33.931628  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:34.428301  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:34.428324  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:34.428332  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:34.428337  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:34.431417  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:34.928406  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:34.928430  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:34.928438  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:34.928442  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:34.931855  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:35.427982  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:35.428003  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:35.428012  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:35.428016  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:35.431540  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:35.928493  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:35.928519  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:35.928530  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:35.928537  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:35.934468  112472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0804 01:29:35.934963  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:36.428378  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:36.428400  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:36.428408  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:36.428412  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:36.431770  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:36.928354  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:36.928388  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:36.928399  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:36.928407  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:36.931884  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:37.427908  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:37.427933  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:37.427945  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:37.427951  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:37.431474  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:37.928435  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:37.928459  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:37.928466  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:37.928471  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:37.931675  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:38.428714  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:38.428738  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:38.428747  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:38.428752  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:38.432416  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:38.433151  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:38.928608  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:38.928630  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:38.928638  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:38.928642  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:38.932111  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:39.428762  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:39.428786  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:39.428795  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:39.428798  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:39.431928  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:39.928167  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:39.928193  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:39.928204  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:39.928209  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:39.931592  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.428226  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:40.428252  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:40.428263  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:40.428268  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:40.432080  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.928413  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:40.928444  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:40.928456  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:40.928462  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:40.931754  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.932508  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:41.427798  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:41.427820  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:41.427829  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:41.427834  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:41.432113  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:41.927998  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:41.928024  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:41.928035  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:41.928047  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:41.931169  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.428720  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:42.428747  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:42.428755  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:42.428759  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:42.432450  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.928530  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:42.928555  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:42.928564  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:42.928567  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:42.932425  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.933343  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:43.428719  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:43.428743  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.428751  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.428755  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.432609  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.433240  112472 node_ready.go:49] node "ha-998889-m02" has status "Ready":"True"
	I0804 01:29:43.433261  112472 node_ready.go:38] duration metric: took 19.005699575s for node "ha-998889-m02" to be "Ready" ...
	I0804 01:29:43.433270  112472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:29:43.433335  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:43.433345  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.433368  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.433378  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.438356  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:43.444542  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.444639  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b8ds7
	I0804 01:29:43.444649  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.444656  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.444661  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.448123  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.449177  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.449192  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.449198  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.449204  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.452230  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.453218  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.453235  112472 pod_ready.go:81] duration metric: took 8.66995ms for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.453243  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.453288  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ddb5m
	I0804 01:29:43.453295  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.453301  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.453305  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.456144  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.456672  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.456689  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.456696  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.456701  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.460353  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.461231  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.461247  112472 pod_ready.go:81] duration metric: took 7.997864ms for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.461256  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.461302  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889
	I0804 01:29:43.461310  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.461317  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.461321  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.463901  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.464364  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.464379  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.464385  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.464388  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.467359  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.467784  112472 pod_ready.go:92] pod "etcd-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.467800  112472 pod_ready.go:81] duration metric: took 6.539173ms for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.467808  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.467853  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m02
	I0804 01:29:43.467860  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.467866  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.467871  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.470917  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.471835  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:43.471851  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.471860  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.471865  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.475070  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.475906  112472 pod_ready.go:92] pod "etcd-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.475921  112472 pod_ready.go:81] duration metric: took 8.107274ms for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.475933  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.629368  112472 request.go:629] Waited for 153.355144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:29:43.629458  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:29:43.629470  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.629482  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.629489  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.632566  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.828724  112472 request.go:629] Waited for 195.289574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.828815  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.828823  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.828832  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.828838  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.831942  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.832467  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.832489  112472 pod_ready.go:81] duration metric: took 356.548247ms for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.832502  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.029644  112472 request.go:629] Waited for 197.067109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:29:44.029742  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:29:44.029749  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.029757  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.029761  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.033449  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.228825  112472 request.go:629] Waited for 194.401684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:44.228903  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:44.228916  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.228943  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.228947  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.232325  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.232807  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:44.232825  112472 pod_ready.go:81] duration metric: took 400.314893ms for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.232834  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.428855  112472 request.go:629] Waited for 195.944534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:29:44.428939  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:29:44.428944  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.428952  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.428956  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.432243  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.629375  112472 request.go:629] Waited for 196.420241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:44.629453  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:44.629462  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.629473  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.629479  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.632648  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.633393  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:44.633412  112472 pod_ready.go:81] duration metric: took 400.571723ms for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.633423  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.829633  112472 request.go:629] Waited for 196.137466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:29:44.829734  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:29:44.829744  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.829753  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.829760  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.833570  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.029822  112472 request.go:629] Waited for 195.371221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.029890  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.029897  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.029908  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.029916  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.033380  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.034127  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.034152  112472 pod_ready.go:81] duration metric: took 400.722428ms for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.034166  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.229395  112472 request.go:629] Waited for 195.115343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:29:45.229470  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:29:45.229478  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.229490  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.229498  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.232707  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.428822  112472 request.go:629] Waited for 195.313836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:45.428923  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:45.428932  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.428943  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.428949  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.432466  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.433542  112472 pod_ready.go:92] pod "kube-proxy-56twz" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.433578  112472 pod_ready.go:81] duration metric: took 399.403294ms for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.433590  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.629724  112472 request.go:629] Waited for 196.037328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:29:45.629829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:29:45.629842  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.629855  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.629863  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.633517  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.829732  112472 request.go:629] Waited for 195.399679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.829805  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.829815  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.829829  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.829840  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.834582  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:45.835514  112472 pod_ready.go:92] pod "kube-proxy-v4j77" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.835533  112472 pod_ready.go:81] duration metric: took 401.935529ms for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.835542  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.029709  112472 request.go:629] Waited for 194.088454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:29:46.029772  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:29:46.029777  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.029785  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.029789  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.032566  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:46.229545  112472 request.go:629] Waited for 196.39197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:46.229616  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:46.229623  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.229636  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.229643  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.232829  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:46.233633  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:46.233653  112472 pod_ready.go:81] duration metric: took 398.104737ms for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.233663  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.429795  112472 request.go:629] Waited for 196.040676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:29:46.429857  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:29:46.429863  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.429871  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.429876  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.432532  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:46.629547  112472 request.go:629] Waited for 196.376781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:46.629636  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:46.629644  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.629653  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.629659  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.632739  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:46.633308  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:46.633326  112472 pod_ready.go:81] duration metric: took 399.657247ms for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.633337  112472 pod_ready.go:38] duration metric: took 3.200048772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
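The pod_ready.go entries above repeat one check per system pod: fetch the pod from kube-system and look at its PodReady condition. Below is a minimal client-go sketch of that check; podReady is an illustrative helper name, not minikube's own code.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the named pod's PodReady condition is True,
// mirroring the per-pod check logged by pod_ready.go above.
func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}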
	I0804 01:29:46.633365  112472 api_server.go:52] waiting for apiserver process to appear ...
	I0804 01:29:46.633423  112472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:29:46.648549  112472 api_server.go:72] duration metric: took 22.52698207s to wait for apiserver process to appear ...
	I0804 01:29:46.648583  112472 api_server.go:88] waiting for apiserver healthz status ...
	I0804 01:29:46.648607  112472 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0804 01:29:46.653004  112472 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0804 01:29:46.653079  112472 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0804 01:29:46.653086  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.653094  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.653103  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.654119  112472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0804 01:29:46.654221  112472 api_server.go:141] control plane version: v1.30.3
	I0804 01:29:46.654238  112472 api_server.go:131] duration metric: took 5.648581ms to wait for apiserver health ...
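The /healthz and /version probes above can be reproduced through client-go's discovery client, which reuses the CA and client certificates from the kubeconfig instead of a bare HTTP client. checkAPIServer below is an illustrative name for the sketch, not a minikube function.

package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer probes /healthz and /version the way the api_server.go lines above do.
func checkAPIServer(ctx context.Context, c *kubernetes.Clientset) error {
	body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body) // expected output: "ok"
	v, err := c.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.30.3
	return nil
}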
	I0804 01:29:46.654246  112472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 01:29:46.829640  112472 request.go:629] Waited for 175.323296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:46.829723  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:46.829729  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.829737  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.829741  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.837657  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:29:46.842637  112472 system_pods.go:59] 17 kube-system pods found
	I0804 01:29:46.842669  112472 system_pods.go:61] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:29:46.842673  112472 system_pods.go:61] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:29:46.842677  112472 system_pods.go:61] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:29:46.842681  112472 system_pods.go:61] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:29:46.842684  112472 system_pods.go:61] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:29:46.842688  112472 system_pods.go:61] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:29:46.842691  112472 system_pods.go:61] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:29:46.842695  112472 system_pods.go:61] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:29:46.842699  112472 system_pods.go:61] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:29:46.842703  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:29:46.842707  112472 system_pods.go:61] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:29:46.842710  112472 system_pods.go:61] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:29:46.842714  112472 system_pods.go:61] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:29:46.842718  112472 system_pods.go:61] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:29:46.842721  112472 system_pods.go:61] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:29:46.842725  112472 system_pods.go:61] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:29:46.842728  112472 system_pods.go:61] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:29:46.842734  112472 system_pods.go:74] duration metric: took 188.48255ms to wait for pod list to return data ...
	I0804 01:29:46.842745  112472 default_sa.go:34] waiting for default service account to be created ...
	I0804 01:29:47.029218  112472 request.go:629] Waited for 186.378146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:29:47.029298  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:29:47.029311  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.029323  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.029333  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.033889  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:47.034176  112472 default_sa.go:45] found service account: "default"
	I0804 01:29:47.034201  112472 default_sa.go:55] duration metric: took 191.448723ms for default service account to be created ...
	I0804 01:29:47.034213  112472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 01:29:47.229666  112472 request.go:629] Waited for 195.365938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:47.229731  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:47.229737  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.229744  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.229748  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.235971  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:29:47.240311  112472 system_pods.go:86] 17 kube-system pods found
	I0804 01:29:47.240347  112472 system_pods.go:89] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:29:47.240353  112472 system_pods.go:89] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:29:47.240358  112472 system_pods.go:89] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:29:47.240362  112472 system_pods.go:89] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:29:47.240366  112472 system_pods.go:89] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:29:47.240371  112472 system_pods.go:89] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:29:47.240375  112472 system_pods.go:89] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:29:47.240382  112472 system_pods.go:89] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:29:47.240386  112472 system_pods.go:89] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:29:47.240391  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:29:47.240395  112472 system_pods.go:89] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:29:47.240400  112472 system_pods.go:89] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:29:47.240404  112472 system_pods.go:89] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:29:47.240410  112472 system_pods.go:89] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:29:47.240414  112472 system_pods.go:89] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:29:47.240417  112472 system_pods.go:89] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:29:47.240421  112472 system_pods.go:89] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:29:47.240432  112472 system_pods.go:126] duration metric: took 206.208464ms to wait for k8s-apps to be running ...
	I0804 01:29:47.240441  112472 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 01:29:47.240489  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:29:47.255788  112472 system_svc.go:56] duration metric: took 15.334437ms WaitForService to wait for kubelet
	I0804 01:29:47.255822  112472 kubeadm.go:582] duration metric: took 23.134258105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:29:47.255849  112472 node_conditions.go:102] verifying NodePressure condition ...
	I0804 01:29:47.429326  112472 request.go:629] Waited for 173.355911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0804 01:29:47.429408  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0804 01:29:47.429419  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.429428  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.429436  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.432960  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:47.433843  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:29:47.433873  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:29:47.433889  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:29:47.433895  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:29:47.433915  112472 node_conditions.go:105] duration metric: took 178.056963ms to run NodePressure ...
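The node_conditions.go lines above read each node's reported capacity (ephemeral storage and CPU) from the node list. Below is a minimal client-go sketch of that readout; printNodeCapacity is an illustrative helper name.

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the two figures reported above:
// ephemeral-storage capacity and CPU capacity.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}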
	I0804 01:29:47.433931  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:29:47.433968  112472 start.go:255] writing updated cluster config ...
	I0804 01:29:47.435993  112472 out.go:177] 
	I0804 01:29:47.437444  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:29:47.437531  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:29:47.439114  112472 out.go:177] * Starting "ha-998889-m03" control-plane node in "ha-998889" cluster
	I0804 01:29:47.440148  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:29:47.440173  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:29:47.440273  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:29:47.440285  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:29:47.440381  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:29:47.440559  112472 start.go:360] acquireMachinesLock for ha-998889-m03: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:29:47.440609  112472 start.go:364] duration metric: took 30.779µs to acquireMachinesLock for "ha-998889-m03"
	I0804 01:29:47.440631  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:47.440776  112472 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0804 01:29:47.442174  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:29:47.442338  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:29:47.442388  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:29:47.457540  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I0804 01:29:47.458045  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:29:47.458603  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:29:47.458628  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:29:47.459051  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:29:47.459247  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:29:47.459429  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:29:47.459594  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:29:47.459621  112472 client.go:168] LocalClient.Create starting
	I0804 01:29:47.459659  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:29:47.459698  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:29:47.459714  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:29:47.459785  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:29:47.459811  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:29:47.459828  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:29:47.459852  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:29:47.459863  112472 main.go:141] libmachine: (ha-998889-m03) Calling .PreCreateCheck
	I0804 01:29:47.460095  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:29:47.460490  112472 main.go:141] libmachine: Creating machine...
	I0804 01:29:47.460504  112472 main.go:141] libmachine: (ha-998889-m03) Calling .Create
	I0804 01:29:47.460659  112472 main.go:141] libmachine: (ha-998889-m03) Creating KVM machine...
	I0804 01:29:47.461802  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found existing default KVM network
	I0804 01:29:47.462068  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found existing private KVM network mk-ha-998889
	I0804 01:29:47.462227  112472 main.go:141] libmachine: (ha-998889-m03) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 ...
	I0804 01:29:47.462258  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.462152  113280 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:29:47.462275  112472 main.go:141] libmachine: (ha-998889-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:29:47.462347  112472 main.go:141] libmachine: (ha-998889-m03) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:29:47.712187  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.712061  113280 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa...
	I0804 01:29:47.800440  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.800294  113280 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/ha-998889-m03.rawdisk...
	I0804 01:29:47.800486  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Writing magic tar header
	I0804 01:29:47.800502  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Writing SSH key tar header
	I0804 01:29:47.800513  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.800452  113280 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 ...
	I0804 01:29:47.800635  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03
	I0804 01:29:47.800661  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 (perms=drwx------)
	I0804 01:29:47.800669  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:29:47.800679  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:29:47.800687  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:29:47.800696  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:29:47.800705  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:29:47.800717  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:29:47.800726  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:29:47.800737  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home
	I0804 01:29:47.800747  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:29:47.800762  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Skipping /home - not owner
	I0804 01:29:47.800773  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:29:47.800786  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:29:47.800797  112472 main.go:141] libmachine: (ha-998889-m03) Creating domain...
	I0804 01:29:47.801883  112472 main.go:141] libmachine: (ha-998889-m03) define libvirt domain using xml: 
	I0804 01:29:47.801914  112472 main.go:141] libmachine: (ha-998889-m03) <domain type='kvm'>
	I0804 01:29:47.801936  112472 main.go:141] libmachine: (ha-998889-m03)   <name>ha-998889-m03</name>
	I0804 01:29:47.801951  112472 main.go:141] libmachine: (ha-998889-m03)   <memory unit='MiB'>2200</memory>
	I0804 01:29:47.801961  112472 main.go:141] libmachine: (ha-998889-m03)   <vcpu>2</vcpu>
	I0804 01:29:47.801970  112472 main.go:141] libmachine: (ha-998889-m03)   <features>
	I0804 01:29:47.801988  112472 main.go:141] libmachine: (ha-998889-m03)     <acpi/>
	I0804 01:29:47.801996  112472 main.go:141] libmachine: (ha-998889-m03)     <apic/>
	I0804 01:29:47.802003  112472 main.go:141] libmachine: (ha-998889-m03)     <pae/>
	I0804 01:29:47.802011  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802017  112472 main.go:141] libmachine: (ha-998889-m03)   </features>
	I0804 01:29:47.802025  112472 main.go:141] libmachine: (ha-998889-m03)   <cpu mode='host-passthrough'>
	I0804 01:29:47.802030  112472 main.go:141] libmachine: (ha-998889-m03)   
	I0804 01:29:47.802035  112472 main.go:141] libmachine: (ha-998889-m03)   </cpu>
	I0804 01:29:47.802043  112472 main.go:141] libmachine: (ha-998889-m03)   <os>
	I0804 01:29:47.802049  112472 main.go:141] libmachine: (ha-998889-m03)     <type>hvm</type>
	I0804 01:29:47.802084  112472 main.go:141] libmachine: (ha-998889-m03)     <boot dev='cdrom'/>
	I0804 01:29:47.802116  112472 main.go:141] libmachine: (ha-998889-m03)     <boot dev='hd'/>
	I0804 01:29:47.802127  112472 main.go:141] libmachine: (ha-998889-m03)     <bootmenu enable='no'/>
	I0804 01:29:47.802138  112472 main.go:141] libmachine: (ha-998889-m03)   </os>
	I0804 01:29:47.802146  112472 main.go:141] libmachine: (ha-998889-m03)   <devices>
	I0804 01:29:47.802152  112472 main.go:141] libmachine: (ha-998889-m03)     <disk type='file' device='cdrom'>
	I0804 01:29:47.802163  112472 main.go:141] libmachine: (ha-998889-m03)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/boot2docker.iso'/>
	I0804 01:29:47.802170  112472 main.go:141] libmachine: (ha-998889-m03)       <target dev='hdc' bus='scsi'/>
	I0804 01:29:47.802176  112472 main.go:141] libmachine: (ha-998889-m03)       <readonly/>
	I0804 01:29:47.802182  112472 main.go:141] libmachine: (ha-998889-m03)     </disk>
	I0804 01:29:47.802189  112472 main.go:141] libmachine: (ha-998889-m03)     <disk type='file' device='disk'>
	I0804 01:29:47.802197  112472 main.go:141] libmachine: (ha-998889-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:29:47.802205  112472 main.go:141] libmachine: (ha-998889-m03)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/ha-998889-m03.rawdisk'/>
	I0804 01:29:47.802215  112472 main.go:141] libmachine: (ha-998889-m03)       <target dev='hda' bus='virtio'/>
	I0804 01:29:47.802222  112472 main.go:141] libmachine: (ha-998889-m03)     </disk>
	I0804 01:29:47.802241  112472 main.go:141] libmachine: (ha-998889-m03)     <interface type='network'>
	I0804 01:29:47.802253  112472 main.go:141] libmachine: (ha-998889-m03)       <source network='mk-ha-998889'/>
	I0804 01:29:47.802264  112472 main.go:141] libmachine: (ha-998889-m03)       <model type='virtio'/>
	I0804 01:29:47.802272  112472 main.go:141] libmachine: (ha-998889-m03)     </interface>
	I0804 01:29:47.802282  112472 main.go:141] libmachine: (ha-998889-m03)     <interface type='network'>
	I0804 01:29:47.802291  112472 main.go:141] libmachine: (ha-998889-m03)       <source network='default'/>
	I0804 01:29:47.802302  112472 main.go:141] libmachine: (ha-998889-m03)       <model type='virtio'/>
	I0804 01:29:47.802313  112472 main.go:141] libmachine: (ha-998889-m03)     </interface>
	I0804 01:29:47.802324  112472 main.go:141] libmachine: (ha-998889-m03)     <serial type='pty'>
	I0804 01:29:47.802332  112472 main.go:141] libmachine: (ha-998889-m03)       <target port='0'/>
	I0804 01:29:47.802342  112472 main.go:141] libmachine: (ha-998889-m03)     </serial>
	I0804 01:29:47.802350  112472 main.go:141] libmachine: (ha-998889-m03)     <console type='pty'>
	I0804 01:29:47.802361  112472 main.go:141] libmachine: (ha-998889-m03)       <target type='serial' port='0'/>
	I0804 01:29:47.802369  112472 main.go:141] libmachine: (ha-998889-m03)     </console>
	I0804 01:29:47.802378  112472 main.go:141] libmachine: (ha-998889-m03)     <rng model='virtio'>
	I0804 01:29:47.802388  112472 main.go:141] libmachine: (ha-998889-m03)       <backend model='random'>/dev/random</backend>
	I0804 01:29:47.802398  112472 main.go:141] libmachine: (ha-998889-m03)     </rng>
	I0804 01:29:47.802406  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802415  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802437  112472 main.go:141] libmachine: (ha-998889-m03)   </devices>
	I0804 01:29:47.802456  112472 main.go:141] libmachine: (ha-998889-m03) </domain>
	I0804 01:29:47.802467  112472 main.go:141] libmachine: (ha-998889-m03) 
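The XML printed above is what gets handed to libvirt when the log reaches "Creating domain...". Below is a minimal sketch of the define-and-start step using the libvirt Go bindings (libvirt.org/go/libvirt); it only illustrates the libvirt calls and is not the minikube kvm2 driver code.

package sketch

import (
	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart defines a persistent domain from XML like the one above and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Create() starts the defined domain; the driver then waits for DHCP and SSH.
	return dom.Create()
}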
	I0804 01:29:47.809409  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:2f:96:e2 in network default
	I0804 01:29:47.809984  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:47.810009  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring networks are active...
	I0804 01:29:47.810807  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring network default is active
	I0804 01:29:47.811254  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring network mk-ha-998889 is active
	I0804 01:29:47.811705  112472 main.go:141] libmachine: (ha-998889-m03) Getting domain xml...
	I0804 01:29:47.812654  112472 main.go:141] libmachine: (ha-998889-m03) Creating domain...
	I0804 01:29:49.074803  112472 main.go:141] libmachine: (ha-998889-m03) Waiting to get IP...
	I0804 01:29:49.075504  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.075918  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.075968  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.075908  113280 retry.go:31] will retry after 189.457657ms: waiting for machine to come up
	I0804 01:29:49.267413  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.268028  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.268065  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.267992  113280 retry.go:31] will retry after 365.715137ms: waiting for machine to come up
	I0804 01:29:49.635599  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.636060  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.636084  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.636007  113280 retry.go:31] will retry after 320.225156ms: waiting for machine to come up
	I0804 01:29:49.957564  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.958013  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.958080  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.957983  113280 retry.go:31] will retry after 606.874403ms: waiting for machine to come up
	I0804 01:29:50.566914  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:50.567429  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:50.567459  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:50.567385  113280 retry.go:31] will retry after 709.427152ms: waiting for machine to come up
	I0804 01:29:51.278500  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:51.278940  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:51.279012  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:51.278925  113280 retry.go:31] will retry after 739.069612ms: waiting for machine to come up
	I0804 01:29:52.019405  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:52.020063  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:52.020098  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:52.019997  113280 retry.go:31] will retry after 746.991915ms: waiting for machine to come up
	I0804 01:29:52.768394  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:52.768717  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:52.768746  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:52.768665  113280 retry.go:31] will retry after 1.374146128s: waiting for machine to come up
	I0804 01:29:54.145379  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:54.145892  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:54.145916  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:54.145852  113280 retry.go:31] will retry after 1.561798019s: waiting for machine to come up
	I0804 01:29:55.709100  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:55.709511  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:55.709544  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:55.709458  113280 retry.go:31] will retry after 2.192385477s: waiting for machine to come up
	I0804 01:29:57.903276  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:57.903806  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:57.903838  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:57.903750  113280 retry.go:31] will retry after 1.945348735s: waiting for machine to come up
	I0804 01:29:59.851064  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:59.851484  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:59.851510  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:59.851452  113280 retry.go:31] will retry after 2.313076479s: waiting for machine to come up
	I0804 01:30:02.166675  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:02.167233  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:30:02.167259  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:30:02.167193  113280 retry.go:31] will retry after 3.956837801s: waiting for machine to come up
	I0804 01:30:06.128554  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:06.128904  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:30:06.128930  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:30:06.128865  113280 retry.go:31] will retry after 3.689366809s: waiting for machine to come up
	I0804 01:30:09.820728  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.821213  112472 main.go:141] libmachine: (ha-998889-m03) Found IP for machine: 192.168.39.148
	I0804 01:30:09.821245  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has current primary IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.821259  112472 main.go:141] libmachine: (ha-998889-m03) Reserving static IP address...
	I0804 01:30:09.821655  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find host DHCP lease matching {name: "ha-998889-m03", mac: "52:54:00:65:ff:5a", ip: "192.168.39.148"} in network mk-ha-998889
	I0804 01:30:09.897493  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Getting to WaitForSSH function...
	I0804 01:30:09.897526  112472 main.go:141] libmachine: (ha-998889-m03) Reserved static IP address: 192.168.39.148
	I0804 01:30:09.897541  112472 main.go:141] libmachine: (ha-998889-m03) Waiting for SSH to be available...
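The "will retry after ..." lines above are a polling loop with growing, randomized delays while waiting for the domain's DHCP lease to show an IP address. Below is a generic sketch of that pattern; waitForIP and the fixed doubling delays are illustrative, and the real intervals in the log are jittered.

package sketch

import (
	"fmt"
	"time"
)

// waitForIP polls lookup() with an increasing delay until it returns an address
// or the timeout expires.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		fmt.Printf("no IP yet, will retry after %s\n", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP address", timeout)
}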
	I0804 01:30:09.900192  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.900585  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889
	I0804 01:30:09.900610  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find defined IP address of network mk-ha-998889 interface with MAC address 52:54:00:65:ff:5a
	I0804 01:30:09.900815  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH client type: external
	I0804 01:30:09.900845  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa (-rw-------)
	I0804 01:30:09.900873  112472 main.go:141] libmachine: (ha-998889-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:30:09.900893  112472 main.go:141] libmachine: (ha-998889-m03) DBG | About to run SSH command:
	I0804 01:30:09.900906  112472 main.go:141] libmachine: (ha-998889-m03) DBG | exit 0
	I0804 01:30:09.904610  112472 main.go:141] libmachine: (ha-998889-m03) DBG | SSH cmd err, output: exit status 255: 
	I0804 01:30:09.904636  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0804 01:30:09.904647  112472 main.go:141] libmachine: (ha-998889-m03) DBG | command : exit 0
	I0804 01:30:09.904658  112472 main.go:141] libmachine: (ha-998889-m03) DBG | err     : exit status 255
	I0804 01:30:09.904677  112472 main.go:141] libmachine: (ha-998889-m03) DBG | output  : 
	I0804 01:30:12.905342  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Getting to WaitForSSH function...
	I0804 01:30:12.907647  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:12.907990  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:12.908008  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:12.908131  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH client type: external
	I0804 01:30:12.908148  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa (-rw-------)
	I0804 01:30:12.908176  112472 main.go:141] libmachine: (ha-998889-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:30:12.908195  112472 main.go:141] libmachine: (ha-998889-m03) DBG | About to run SSH command:
	I0804 01:30:12.908209  112472 main.go:141] libmachine: (ha-998889-m03) DBG | exit 0
	I0804 01:30:13.037574  112472 main.go:141] libmachine: (ha-998889-m03) DBG | SSH cmd err, output: <nil>: 
	I0804 01:30:13.037860  112472 main.go:141] libmachine: (ha-998889-m03) KVM machine creation complete!
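WaitForSSH above shells out to the system ssh client and treats a zero exit status from "exit 0" as readiness; the first attempt fails with status 255 because the guest address is not reachable yet. Below is a small sketch of that probe; sshReady is an illustrative helper, and the options mirror the ones visible in the log.

package sketch

import (
	"os/exec"
)

// sshReady runs the same "exit 0" probe as WaitForSSH: it returns true once
// sshd is up and the machine's key is accepted.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("/usr/bin/ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit 0",
	)
	// A non-zero exit (e.g. status 255, as in the first attempt above) means not ready yet.
	return cmd.Run() == nil
}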
	I0804 01:30:13.038261  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:30:13.038837  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:13.039026  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:13.039157  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:30:13.039173  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:30:13.041139  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:30:13.041158  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:30:13.041167  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:30:13.041173  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.043399  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.043769  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.043798  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.043969  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.044180  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.044362  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.044571  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.044785  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.045039  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.045051  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:30:13.161179  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:30:13.161204  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:30:13.161212  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.164284  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.164748  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.164781  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.164997  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.165223  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.165409  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.165554  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.165743  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.165930  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.165943  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:30:13.282476  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:30:13.282563  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:30:13.282577  112472 main.go:141] libmachine: Provisioning with buildroot...
	I0804 01:30:13.282591  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.282885  112472 buildroot.go:166] provisioning hostname "ha-998889-m03"
	I0804 01:30:13.282913  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.283161  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.286094  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.286506  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.286527  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.286720  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.286918  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.287105  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.287259  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.287465  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.287685  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.287698  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889-m03 && echo "ha-998889-m03" | sudo tee /etc/hostname
	I0804 01:30:13.420354  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889-m03
	
	I0804 01:30:13.420386  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.422993  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.423428  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.423458  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.423605  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.423805  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.424021  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.424184  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.424342  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.424516  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.424536  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:30:13.551466  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:30:13.551509  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:30:13.551533  112472 buildroot.go:174] setting up certificates
	I0804 01:30:13.551547  112472 provision.go:84] configureAuth start
	I0804 01:30:13.551561  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.551907  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:13.554723  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.555212  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.555243  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.555328  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.557675  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.558008  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.558035  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.558148  112472 provision.go:143] copyHostCerts
	I0804 01:30:13.558196  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:30:13.558251  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:30:13.558263  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:30:13.558364  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:30:13.558476  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:30:13.558516  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:30:13.558528  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:30:13.558586  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:30:13.558661  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:30:13.558683  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:30:13.558691  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:30:13.558717  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:30:13.558784  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889-m03 san=[127.0.0.1 192.168.39.148 ha-998889-m03 localhost minikube]
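The server cert generated above carries subject alternative names for the loopback address, the node's own IP (192.168.39.148), its hostname, and the generic names localhost/minikube, so the same certificate validates regardless of which of those addresses a client dials. Below is a minimal, self-contained Go sketch of issuing a certificate with that SAN list using only the standard library; it is self-signed and uses illustrative key sizes and lifetimes, not minikube's actual CA-signed implementation (which signs with ca.pem/ca-key.pem as shown in the log line).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: self-signed here, whereas the provisioner signs with the cluster CA.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-998889-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"ha-998889-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.148")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}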
	I0804 01:30:13.664412  112472 provision.go:177] copyRemoteCerts
	I0804 01:30:13.664474  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:30:13.664499  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.667368  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.667684  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.667720  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.667868  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.668059  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.668204  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.668368  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:13.761411  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:30:13.761490  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:30:13.793581  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:30:13.793658  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 01:30:13.822382  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:30:13.822468  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:30:13.848437  112472 provision.go:87] duration metric: took 296.872735ms to configureAuth
	I0804 01:30:13.848468  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:30:13.848804  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:13.848905  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.852406  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.852767  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.852846  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.852975  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.853168  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.853332  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.853493  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.853655  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.853815  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.853829  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:30:14.128268  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
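The "%!s(MISSING)" fragments in the command above (and the "%!N(MISSING)" in the later "date +..." and "stat -c ..." commands) are not part of what actually runs on the guest: they are Go fmt artifacts that appear when a string already containing printf verbs, such as the literal %s in the printf pipeline, is echoed back through a Printf-style logger with no matching arguments. A two-line sketch reproducing the effect:

package main

import "fmt"

func main() {
	// The literal %s and %N have no corresponding arguments, so fmt renders
	// them as %!s(MISSING) and %!N(MISSING), exactly as seen in the log.
	fmt.Printf("date +%s.%N\n")
}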
	
	I0804 01:30:14.128296  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:30:14.128305  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetURL
	I0804 01:30:14.129674  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using libvirt version 6000000
	I0804 01:30:14.132270  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.132741  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.132783  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.132998  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:30:14.133017  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:30:14.133027  112472 client.go:171] duration metric: took 26.673394167s to LocalClient.Create
	I0804 01:30:14.133074  112472 start.go:167] duration metric: took 26.67346353s to libmachine.API.Create "ha-998889"
	I0804 01:30:14.133088  112472 start.go:293] postStartSetup for "ha-998889-m03" (driver="kvm2")
	I0804 01:30:14.133121  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:30:14.133150  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.133443  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:30:14.133476  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.135882  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.136213  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.136249  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.136431  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.136623  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.136756  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.136933  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.224635  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:30:14.229334  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:30:14.229381  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:30:14.229455  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:30:14.229530  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:30:14.229541  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:30:14.229636  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:30:14.239822  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:30:14.268485  112472 start.go:296] duration metric: took 135.379938ms for postStartSetup
	I0804 01:30:14.268543  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:30:14.269200  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:14.271918  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.272262  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.272292  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.272695  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:30:14.272949  112472 start.go:128] duration metric: took 26.832159097s to createHost
	I0804 01:30:14.272979  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.275655  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.276002  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.276026  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.276211  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.276420  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.276595  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.276777  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.276968  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:14.277160  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:14.277174  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:30:14.394389  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722735014.374072982
	
	I0804 01:30:14.394416  112472 fix.go:216] guest clock: 1722735014.374072982
	I0804 01:30:14.394426  112472 fix.go:229] Guest: 2024-08-04 01:30:14.374072982 +0000 UTC Remote: 2024-08-04 01:30:14.272965577 +0000 UTC m=+160.274827793 (delta=101.107405ms)
	I0804 01:30:14.394448  112472 fix.go:200] guest clock delta is within tolerance: 101.107405ms
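Here the provisioner reads the guest clock via "date +%s.%N", compares it against the host-side timestamp recorded for the same moment, and notes that the ~101ms delta is within tolerance. A small illustrative check of the same arithmetic, with the tolerance value chosen arbitrarily for the example rather than taken from minikube:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1722735014, 374072982) // parsed from `date +%s.%N` on the guest
	remote := time.Date(2024, 8, 4, 1, 30, 14, 272965577, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's constant
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drift too large: %v\n", delta)
	}
}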
	I0804 01:30:14.394455  112472 start.go:83] releasing machines lock for "ha-998889-m03", held for 26.953834041s
	I0804 01:30:14.394480  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.394787  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:14.397280  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.397679  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.397707  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.399948  112472 out.go:177] * Found network options:
	I0804 01:30:14.401274  112472 out.go:177]   - NO_PROXY=192.168.39.12,192.168.39.200
	W0804 01:30:14.402466  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 01:30:14.402488  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:30:14.402506  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403106  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403327  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403436  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:30:14.403482  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	W0804 01:30:14.403591  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 01:30:14.403610  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:30:14.403668  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:30:14.403686  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.406583  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.406912  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407309  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.407336  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407343  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.407462  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.407483  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407535  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.407638  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.407751  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.407872  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.407967  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.408034  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.408175  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.661633  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:30:14.668852  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:30:14.668933  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:30:14.686275  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:30:14.686304  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:30:14.686386  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:30:14.707419  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:30:14.721366  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:30:14.721433  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:30:14.736634  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:30:14.752510  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:30:14.871429  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:30:15.053551  112472 docker.go:233] disabling docker service ...
	I0804 01:30:15.053634  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:30:15.068636  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:30:15.082000  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:30:15.199277  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:30:15.319789  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:30:15.335346  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:30:15.356824  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:30:15.356888  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.370341  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:30:15.370413  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.385555  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.396720  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.408113  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:30:15.419473  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.430763  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.450864  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.462623  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:30:15.472861  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:30:15.472956  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:30:15.486904  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 01:30:15.496524  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:15.619668  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:30:15.764119  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:30:15.764213  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:30:15.769427  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:30:15.769500  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:30:15.773524  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:30:15.810930  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
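After restarting CRI-O the start-up path does not assume the runtime is immediately ready: it waits up to 60 seconds for the CRI socket to appear and then up to another 60 seconds for a working crictl, as the "Will wait 60s for ..." lines show. A generic polling helper in the same spirit (the path and timeout mirror the log; the helper itself is an illustrative sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is present; safe to run crictl version")
}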
	I0804 01:30:15.811011  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:30:15.840357  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:30:15.871423  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:30:15.872754  112472 out.go:177]   - env NO_PROXY=192.168.39.12
	I0804 01:30:15.874141  112472 out.go:177]   - env NO_PROXY=192.168.39.12,192.168.39.200
	I0804 01:30:15.875479  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:15.878552  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:15.879139  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:15.879165  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:15.879390  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:30:15.883860  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:30:15.896251  112472 mustload.go:65] Loading cluster: ha-998889
	I0804 01:30:15.896487  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:15.896754  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:15.896802  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:15.912025  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0804 01:30:15.912523  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:15.913190  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:15.913213  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:15.913546  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:15.913770  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:30:15.915381  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:30:15.915679  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:15.915722  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:15.930291  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0804 01:30:15.930709  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:15.931148  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:15.931169  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:15.931534  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:15.931749  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:30:15.931981  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.148
	I0804 01:30:15.931994  112472 certs.go:194] generating shared ca certs ...
	I0804 01:30:15.932028  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:15.932178  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:30:15.932241  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:30:15.932256  112472 certs.go:256] generating profile certs ...
	I0804 01:30:15.932358  112472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:30:15.932391  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d
	I0804 01:30:15.932413  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.148 192.168.39.254]
	I0804 01:30:16.080096  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d ...
	I0804 01:30:16.080131  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d: {Name:mkc85edb2ed057b5fb989579a363ce447c718130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:16.080309  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d ...
	I0804 01:30:16.080321  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d: {Name:mkc7544167880e60634768ff5b37bb0473e49d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:16.080388  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:30:16.080524  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:30:16.080682  112472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:30:16.080699  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:30:16.080712  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:30:16.080725  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:30:16.080738  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:30:16.080753  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:30:16.080766  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:30:16.080778  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:30:16.080793  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:30:16.080853  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:30:16.080895  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:30:16.080908  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:30:16.080937  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:30:16.080968  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:30:16.081005  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:30:16.081066  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:30:16.081099  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.081113  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.081126  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.081163  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:30:16.084207  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:16.084548  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:30:16.084579  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:16.084763  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:30:16.085030  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:30:16.085183  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:30:16.085343  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:30:16.161830  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0804 01:30:16.168321  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 01:30:16.181751  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0804 01:30:16.186828  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 01:30:16.197820  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 01:30:16.202400  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 01:30:16.214244  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0804 01:30:16.218690  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 01:30:16.229869  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0804 01:30:16.234497  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 01:30:16.246798  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0804 01:30:16.251327  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 01:30:16.263416  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:30:16.291661  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:30:16.318484  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:30:16.346074  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:30:16.373940  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0804 01:30:16.398867  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 01:30:16.426843  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:30:16.454738  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:30:16.481058  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:30:16.505569  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:30:16.530499  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:30:16.556438  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 01:30:16.574307  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 01:30:16.593420  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 01:30:16.611302  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 01:30:16.631728  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 01:30:16.650414  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 01:30:16.671020  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 01:30:16.689275  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:30:16.695184  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:30:16.706723  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.711610  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.711674  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.717526  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:30:16.728558  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:30:16.739903  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.744796  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.744862  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.750729  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:30:16.763427  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:30:16.776126  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.781382  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.781459  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.787358  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
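The "ln -fs ... /etc/ssl/certs/<hash>.0" steps above install each CA bundle under its OpenSSL subject-hash name (b5213941.0, 3ec20f2e.0, 51391683.0), which is how OpenSSL-based clients look up trusted certificates in a hashed directory. A sketch of deriving that name and creating the link by shelling out to the same "openssl x509 -hash" command the log runs (the cert path is the one shown above; the Go wrapper is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same command the provisioner runs: print the subject hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // -f semantics of `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}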
	I0804 01:30:16.801441  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:30:16.806107  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:30:16.806180  112472 kubeadm.go:934] updating node {m03 192.168.39.148 8443 v1.30.3 crio true true} ...
	I0804 01:30:16.806283  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:30:16.806319  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:30:16.806365  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:30:16.825844  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:30:16.825921  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
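The static-pod manifest above is generated per control-plane node: the VIP (address: 192.168.39.254), interface, and API server port are filled in, and control-plane load-balancing (lb_enable/lb_port) is switched on automatically, as the "auto-enabling control-plane load-balancing" line notes. A minimal text/template sketch of producing such a manifest follows; the template is deliberately abbreviated and the parameter struct uses illustrative field names, not minikube's generator:

package main

import (
	"os"
	"text/template"
)

// Illustrative parameters; minikube's own generator has more fields.
type kubeVipParams struct {
	Address   string
	Interface string
	Port      string
	LBEnable  bool
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: address, value: "{{.Address}}"}
    - {name: vip_interface, value: "{{.Interface}}"}
    - {name: port, value: "{{.Port}}"}
    - {name: lb_enable, value: "{{.LBEnable}}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	p := kubeVipParams{Address: "192.168.39.254", Interface: "eth0", Port: "8443", LBEnable: true}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}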
	I0804 01:30:16.826004  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:30:16.836795  112472 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 01:30:16.836887  112472 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 01:30:16.847722  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0804 01:30:16.847754  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0804 01:30:16.847775  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
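Because the new node has no kubelet/kubeadm/kubectl yet, the binaries are fetched from dl.k8s.io with a "?checksum=file:<url>.sha256" qualifier, meaning each download is verified against its published SHA-256 before being installed under /var/lib/minikube/binaries. A self-contained sketch of that download-and-verify step (the URL is one of those in the log; the helper itself is illustrative, not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified fetches url into dst and checks it against the SHA-256
// published at url+".sha256" (the checksum=file: scheme seen in the log).
func downloadVerified(url, dst string) error {
	// Fetch the expected digest first (a small text file containing the hex hash).
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	sum, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0]

	// Stream the binary to disk while hashing it.
	body, err := http.Get(url)
	if err != nil {
		return err
	}
	defer body.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), body.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	if err := downloadVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("verified download:", url)
}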
	I0804 01:30:16.847782  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:30:16.847786  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:30:16.847792  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:30:16.847871  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:30:16.847873  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:30:16.866525  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:30:16.866681  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 01:30:16.866720  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 01:30:16.866750  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:30:16.866635  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 01:30:16.866789  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 01:30:16.899709  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 01:30:16.899754  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0804 01:30:17.867269  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 01:30:17.878945  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 01:30:17.898032  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:30:17.916986  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:30:17.936555  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:30:17.941044  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:30:17.955915  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:18.092240  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:30:18.110849  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:30:18.111231  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:18.111280  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:18.126563  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0804 01:30:18.127163  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:18.127798  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:18.127825  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:18.128255  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:18.128471  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:30:18.128674  112472 start.go:317] joinCluster: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:30:18.128823  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 01:30:18.128844  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:30:18.132258  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:18.132695  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:30:18.132732  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:18.132913  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:30:18.133115  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:30:18.133281  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:30:18.133447  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:30:18.386102  112472 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:30:18.386168  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token woe9t9.vi1uxuwpaas0hcwg --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0804 01:30:41.003939  112472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token woe9t9.vi1uxuwpaas0hcwg --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (22.617738978s)
	I0804 01:30:41.003983  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 01:30:41.698461  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889-m03 minikube.k8s.io/updated_at=2024_08_04T01_30_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=false
	I0804 01:30:41.843170  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-998889-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0804 01:30:41.972331  112472 start.go:319] duration metric: took 23.843650014s to joinCluster
	I0804 01:30:41.972451  112472 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:30:41.972822  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:41.973997  112472 out.go:177] * Verifying Kubernetes components...
	I0804 01:30:41.975277  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:42.275005  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:30:42.307156  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:30:42.307609  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 01:30:42.307713  112472 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0804 01:30:42.308051  112472 node_ready.go:35] waiting up to 6m0s for node "ha-998889-m03" to be "Ready" ...
	I0804 01:30:42.308170  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:42.308185  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:42.308196  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:42.308204  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:42.311410  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:42.808869  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:42.808900  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:42.808918  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:42.808923  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:42.812787  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:43.308996  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:43.309046  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:43.309060  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:43.309065  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:43.312826  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:43.808495  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:43.808522  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:43.808532  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:43.808538  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:43.812765  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:44.308478  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:44.308509  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:44.308519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:44.308524  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:44.313466  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:44.314164  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:44.808368  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:44.808393  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:44.808404  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:44.808410  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:44.811824  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:45.308699  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:45.308720  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:45.308730  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:45.308738  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:45.312130  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:45.808965  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:45.808988  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:45.808996  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:45.809000  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:45.812789  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.308583  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:46.308613  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:46.308626  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:46.308634  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:46.312136  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.809377  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:46.809413  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:46.809426  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:46.809430  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:46.812751  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.813645  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:47.309143  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:47.309182  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:47.309193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:47.309198  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:47.314050  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:47.808301  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:47.808328  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:47.808338  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:47.808342  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:47.812309  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:48.308363  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:48.308390  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:48.308400  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:48.308406  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:48.312924  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:48.809066  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:48.809099  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:48.809109  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:48.809114  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:48.812530  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:49.308430  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:49.308453  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:49.308462  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:49.308468  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:49.312205  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:49.313218  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:49.808688  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:49.808716  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:49.808724  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:49.808729  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:49.812289  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:50.309123  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:50.309150  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:50.309164  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:50.309168  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:50.312828  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:50.809047  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:50.809074  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:50.809085  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:50.809091  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:50.812368  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:51.309245  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:51.309274  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:51.309285  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:51.309291  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:51.313490  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:51.314034  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:51.808304  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:51.808329  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:51.808348  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:51.808352  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:51.811637  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:52.309113  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:52.309140  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:52.309147  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:52.309151  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:52.312552  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:52.808933  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:52.808958  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:52.808966  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:52.808972  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:52.813010  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:53.308307  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:53.308333  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:53.308342  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:53.308347  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:53.312252  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:53.808884  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:53.808908  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:53.808917  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:53.808921  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:53.812577  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:53.815786  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:54.308578  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:54.308603  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:54.308611  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:54.308616  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:54.311890  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:54.808843  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:54.808874  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:54.808886  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:54.808892  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:54.812280  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:55.308795  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:55.308821  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:55.308833  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:55.308840  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:55.312214  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:55.809063  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:55.809088  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:55.809098  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:55.809102  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:55.813432  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:56.308394  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:56.308419  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:56.308428  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:56.308431  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:56.311872  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:56.312591  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:56.808943  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:56.808967  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:56.808976  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:56.808980  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:56.812629  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:57.308638  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:57.308662  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:57.308674  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:57.308680  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:57.312518  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:57.809285  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:57.809310  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:57.809318  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:57.809322  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:57.812874  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:58.309200  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:58.309224  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:58.309233  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:58.309236  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:58.313089  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:58.313678  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:58.809108  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:58.809132  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:58.809141  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:58.809146  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:58.813028  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.309031  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:59.309056  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.309065  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.309068  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.312234  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.808448  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:59.808472  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.808483  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.808488  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.826718  112472 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0804 01:30:59.828110  112472 node_ready.go:49] node "ha-998889-m03" has status "Ready":"True"
	I0804 01:30:59.828143  112472 node_ready.go:38] duration metric: took 17.520049448s for node "ha-998889-m03" to be "Ready" ...
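The node_ready.go wait above is minikube repeatedly issuing GET /api/v1/nodes/ha-998889-m03 until the node reports a Ready condition of True. As a rough illustration only (this is not minikube's code), the same check written directly against client-go might look like the sketch below; the kubeconfig path and node name are taken from this log, and the 500ms poll interval mirrors the request spacing above.

	// Minimal sketch, assuming client-go is available; illustrative, not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path and node name are assumptions copied from this log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-90243/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the wait above
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-998889-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// A node counts as Ready when its NodeReady condition is True.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log above polls roughly every 500ms
		}
		fmt.Println("timed out waiting for node to become Ready")
	}
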
	I0804 01:30:59.828156  112472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:30:59.828245  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:30:59.828259  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.828270  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.828275  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.838580  112472 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0804 01:30:59.845272  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.845380  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b8ds7
	I0804 01:30:59.845391  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.845401  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.845407  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.849095  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.849914  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.849928  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.849936  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.849941  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.852403  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.852929  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.852945  112472 pod_ready.go:81] duration metric: took 7.648649ms for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.852954  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.853003  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ddb5m
	I0804 01:30:59.853010  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.853017  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.853020  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.855735  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.856353  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.856367  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.856376  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.856383  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.862863  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:30:59.863413  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.863439  112472 pod_ready.go:81] duration metric: took 10.477352ms for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.863452  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.863522  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889
	I0804 01:30:59.863532  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.863543  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.863548  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.865872  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.866493  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.866511  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.866519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.866522  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.868836  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.869558  112472 pod_ready.go:92] pod "etcd-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.869582  112472 pod_ready.go:81] duration metric: took 6.121811ms for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.869594  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.869702  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m02
	I0804 01:30:59.869716  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.869726  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.869733  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.872935  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.873789  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:30:59.873803  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.873810  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.873814  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.876184  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.876681  112472 pod_ready.go:92] pod "etcd-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.876700  112472 pod_ready.go:81] duration metric: took 7.098495ms for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.876711  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.009081  112472 request.go:629] Waited for 132.282502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m03
	I0804 01:31:00.009145  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m03
	I0804 01:31:00.009152  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.009160  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.009164  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.012991  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.209108  112472 request.go:629] Waited for 195.384298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:00.209180  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:00.209185  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.209193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.209199  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.212249  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.213049  112472 pod_ready.go:92] pod "etcd-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:00.213072  112472 pod_ready.go:81] duration metric: took 336.352876ms for pod "etcd-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
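The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's client-side rate limiter; with QPS and Burst left at 0 in rest.Config (as the config dump earlier in this log shows), client-go falls back to its defaults of roughly 5 requests/second with a burst of 10, so bursts of node/pod GETs queue briefly. A hypothetical helper, reusing the imports from the previous sketch, shows the two knobs involved; the values are arbitrary examples, not what minikube uses.

	// newFastClient is a hypothetical helper (not part of minikube) showing the
	// client-side throttling knobs; it reuses the imports from the previous sketch.
	func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-side requests per second (client-go defaults to ~5 when left at 0)
		cfg.Burst = 100 // short bursts allowed above QPS before throttling kicks in
		return kubernetes.NewForConfig(cfg)
	}
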
	I0804 01:31:00.213095  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.409304  112472 request.go:629] Waited for 196.122455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:31:00.409438  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:31:00.409453  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.409464  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.409472  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.413050  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.608903  112472 request.go:629] Waited for 194.997248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:00.608983  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:00.608991  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.608999  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.609006  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.612483  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.613128  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:00.613158  112472 pod_ready.go:81] duration metric: took 400.051229ms for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.613171  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.809394  112472 request.go:629] Waited for 196.092914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:31:00.809483  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:31:00.809494  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.809502  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.809510  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.813330  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.008719  112472 request.go:629] Waited for 194.195442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:01.008812  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:01.008818  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.008826  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.008832  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.012244  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.013108  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.013127  112472 pod_ready.go:81] duration metric: took 399.947721ms for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.013137  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.209257  112472 request.go:629] Waited for 196.041527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m03
	I0804 01:31:01.209339  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m03
	I0804 01:31:01.209347  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.209376  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.209387  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.212936  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.409296  112472 request.go:629] Waited for 195.427061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:01.409386  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:01.409393  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.409403  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.409409  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.412961  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.413564  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.413585  112472 pod_ready.go:81] duration metric: took 400.440867ms for pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.413600  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.608483  112472 request.go:629] Waited for 194.807036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:31:01.608576  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:31:01.608588  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.608599  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.608608  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.612025  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.809427  112472 request.go:629] Waited for 196.415288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:01.809528  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:01.809540  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.809552  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.809563  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.813110  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.813836  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.813858  112472 pod_ready.go:81] duration metric: took 400.250706ms for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.813868  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.008956  112472 request.go:629] Waited for 195.007111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:31:02.009023  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:31:02.009032  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.009043  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.009053  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.013144  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:31:02.209424  112472 request.go:629] Waited for 195.382799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:02.209482  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:02.209487  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.209500  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.209506  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.213058  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.213777  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:02.213798  112472 pod_ready.go:81] duration metric: took 399.923508ms for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.213807  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.408974  112472 request.go:629] Waited for 195.100368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m03
	I0804 01:31:02.409073  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m03
	I0804 01:31:02.409081  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.409089  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.409093  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.412322  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.609305  112472 request.go:629] Waited for 196.268064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:02.609402  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:02.609411  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.609423  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.609432  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.612667  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.613449  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:02.613477  112472 pod_ready.go:81] duration metric: took 399.661848ms for pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.613490  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.809542  112472 request.go:629] Waited for 195.946316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:31:02.809628  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:31:02.809640  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.809650  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.809660  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.813159  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.009497  112472 request.go:629] Waited for 195.334978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:03.009573  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:03.009580  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.009591  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.009616  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.013257  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.013965  112472 pod_ready.go:92] pod "kube-proxy-56twz" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.013990  112472 pod_ready.go:81] duration metric: took 400.490961ms for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.014001  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.208538  112472 request.go:629] Waited for 194.457271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:31:03.208640  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:31:03.208653  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.208664  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.208674  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.212345  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.408592  112472 request.go:629] Waited for 195.291669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:03.408692  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:03.408703  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.408711  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.408716  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.412265  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.412890  112472 pod_ready.go:92] pod "kube-proxy-v4j77" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.412913  112472 pod_ready.go:81] duration metric: took 398.906611ms for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.412922  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wj5z9" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.609092  112472 request.go:629] Waited for 196.107713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj5z9
	I0804 01:31:03.609176  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj5z9
	I0804 01:31:03.609186  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.609194  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.609199  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.613145  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.809439  112472 request.go:629] Waited for 195.396824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:03.809543  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:03.809555  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.809569  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.809577  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.813455  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.814254  112472 pod_ready.go:92] pod "kube-proxy-wj5z9" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.814279  112472 pod_ready.go:81] duration metric: took 401.349853ms for pod "kube-proxy-wj5z9" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.814292  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.009381  112472 request.go:629] Waited for 194.978939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:31:04.009442  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:31:04.009447  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.009454  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.009460  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.012698  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.208984  112472 request.go:629] Waited for 195.727805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:04.209062  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:04.209067  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.209076  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.209081  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.212897  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.213751  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:04.213776  112472 pod_ready.go:81] duration metric: took 399.475908ms for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.213786  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.408777  112472 request.go:629] Waited for 194.906433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:31:04.408848  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:31:04.408854  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.408861  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.408871  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.412642  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.609010  112472 request.go:629] Waited for 195.402222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:04.609081  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:04.609087  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.609095  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.609099  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.612847  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.613707  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:04.613729  112472 pod_ready.go:81] duration metric: took 399.935389ms for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.613742  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.808754  112472 request.go:629] Waited for 194.934148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m03
	I0804 01:31:04.808829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m03
	I0804 01:31:04.808834  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.808846  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.808849  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.812481  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.008793  112472 request.go:629] Waited for 195.369713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:05.008876  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:05.008882  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.008890  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.008894  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.012567  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.013447  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:05.013471  112472 pod_ready.go:81] duration metric: took 399.720375ms for pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:05.013487  112472 pod_ready.go:38] duration metric: took 5.185318039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:31:05.013508  112472 api_server.go:52] waiting for apiserver process to appear ...
	I0804 01:31:05.013572  112472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:31:05.031163  112472 api_server.go:72] duration metric: took 23.05865127s to wait for apiserver process to appear ...
	I0804 01:31:05.031198  112472 api_server.go:88] waiting for apiserver healthz status ...
	I0804 01:31:05.031220  112472 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0804 01:31:05.035658  112472 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0804 01:31:05.035721  112472 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0804 01:31:05.035728  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.035736  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.035742  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.036644  112472 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0804 01:31:05.036704  112472 api_server.go:141] control plane version: v1.30.3
	I0804 01:31:05.036714  112472 api_server.go:131] duration metric: took 5.510033ms to wait for apiserver health ...
	I0804 01:31:05.036724  112472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 01:31:05.209160  112472 request.go:629] Waited for 172.366452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.209257  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.209273  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.209285  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.209297  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.216801  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:31:05.223052  112472 system_pods.go:59] 24 kube-system pods found
	I0804 01:31:05.223085  112472 system_pods.go:61] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:31:05.223090  112472 system_pods.go:61] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:31:05.223094  112472 system_pods.go:61] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:31:05.223097  112472 system_pods.go:61] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:31:05.223100  112472 system_pods.go:61] "etcd-ha-998889-m03" [6d4964c1-5a0a-4f37-900d-5b7746fab7ec] Running
	I0804 01:31:05.223103  112472 system_pods.go:61] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:31:05.223106  112472 system_pods.go:61] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:31:05.223109  112472 system_pods.go:61] "kindnet-rsp5h" [7db6f750-c2f4-404f-8ca1-49365012789d] Running
	I0804 01:31:05.223112  112472 system_pods.go:61] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:31:05.223115  112472 system_pods.go:61] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:31:05.223118  112472 system_pods.go:61] "kube-apiserver-ha-998889-m03" [836845ff-1fd9-45a1-b3d1-2bccf0cde74a] Running
	I0804 01:31:05.223122  112472 system_pods.go:61] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:31:05.223125  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:31:05.223128  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m03" [ab317268-bc19-4dfd-bcd3-f1fc493b337e] Running
	I0804 01:31:05.223131  112472 system_pods.go:61] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:31:05.223135  112472 system_pods.go:61] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:31:05.223139  112472 system_pods.go:61] "kube-proxy-wj5z9" [36f91407-7b5a-4101-b7a9-9adbf18a209f] Running
	I0804 01:31:05.223144  112472 system_pods.go:61] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:31:05.223147  112472 system_pods.go:61] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:31:05.223161  112472 system_pods.go:61] "kube-scheduler-ha-998889-m03" [cb00cbab-4deb-4c0f-a4e5-9f853235c528] Running
	I0804 01:31:05.223167  112472 system_pods.go:61] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:31:05.223170  112472 system_pods.go:61] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:31:05.223173  112472 system_pods.go:61] "kube-vip-ha-998889-m03" [b7fea609-e938-4537-973d-bd18eaffe449] Running
	I0804 01:31:05.223175  112472 system_pods.go:61] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:31:05.223182  112472 system_pods.go:74] duration metric: took 186.451699ms to wait for pod list to return data ...
	I0804 01:31:05.223193  112472 default_sa.go:34] waiting for default service account to be created ...
	I0804 01:31:05.408565  112472 request.go:629] Waited for 185.28427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:31:05.408629  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:31:05.408635  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.408643  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.408648  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.412153  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.412312  112472 default_sa.go:45] found service account: "default"
	I0804 01:31:05.412366  112472 default_sa.go:55] duration metric: took 189.127271ms for default service account to be created ...
	I0804 01:31:05.412383  112472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 01:31:05.609477  112472 request.go:629] Waited for 196.990181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.609540  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.609545  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.609556  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.609566  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.617750  112472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0804 01:31:05.623803  112472 system_pods.go:86] 24 kube-system pods found
	I0804 01:31:05.623837  112472 system_pods.go:89] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:31:05.623843  112472 system_pods.go:89] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:31:05.623848  112472 system_pods.go:89] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:31:05.623852  112472 system_pods.go:89] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:31:05.623857  112472 system_pods.go:89] "etcd-ha-998889-m03" [6d4964c1-5a0a-4f37-900d-5b7746fab7ec] Running
	I0804 01:31:05.623861  112472 system_pods.go:89] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:31:05.623865  112472 system_pods.go:89] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:31:05.623869  112472 system_pods.go:89] "kindnet-rsp5h" [7db6f750-c2f4-404f-8ca1-49365012789d] Running
	I0804 01:31:05.623873  112472 system_pods.go:89] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:31:05.623877  112472 system_pods.go:89] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:31:05.623881  112472 system_pods.go:89] "kube-apiserver-ha-998889-m03" [836845ff-1fd9-45a1-b3d1-2bccf0cde74a] Running
	I0804 01:31:05.623885  112472 system_pods.go:89] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:31:05.623889  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:31:05.623894  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m03" [ab317268-bc19-4dfd-bcd3-f1fc493b337e] Running
	I0804 01:31:05.623902  112472 system_pods.go:89] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:31:05.623909  112472 system_pods.go:89] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:31:05.623912  112472 system_pods.go:89] "kube-proxy-wj5z9" [36f91407-7b5a-4101-b7a9-9adbf18a209f] Running
	I0804 01:31:05.623916  112472 system_pods.go:89] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:31:05.623920  112472 system_pods.go:89] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:31:05.623924  112472 system_pods.go:89] "kube-scheduler-ha-998889-m03" [cb00cbab-4deb-4c0f-a4e5-9f853235c528] Running
	I0804 01:31:05.623927  112472 system_pods.go:89] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:31:05.623930  112472 system_pods.go:89] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:31:05.623934  112472 system_pods.go:89] "kube-vip-ha-998889-m03" [b7fea609-e938-4537-973d-bd18eaffe449] Running
	I0804 01:31:05.623937  112472 system_pods.go:89] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:31:05.623944  112472 system_pods.go:126] duration metric: took 211.555603ms to wait for k8s-apps to be running ...
	I0804 01:31:05.623953  112472 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 01:31:05.623998  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:31:05.641051  112472 system_svc.go:56] duration metric: took 17.086327ms WaitForService to wait for kubelet
	I0804 01:31:05.641083  112472 kubeadm.go:582] duration metric: took 23.668574748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:31:05.641103  112472 node_conditions.go:102] verifying NodePressure condition ...
	I0804 01:31:05.808449  112472 request.go:629] Waited for 167.265829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0804 01:31:05.808512  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0804 01:31:05.808518  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.808525  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.808529  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.812316  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.813391  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813419  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813437  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813443  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813448  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813453  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813458  112472 node_conditions.go:105] duration metric: took 172.35042ms to run NodePressure ...
	I0804 01:31:05.813478  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:31:05.813503  112472 start.go:255] writing updated cluster config ...
	I0804 01:31:05.813886  112472 ssh_runner.go:195] Run: rm -f paused
	I0804 01:31:05.867511  112472 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 01:31:05.869763  112472 out.go:177] * Done! kubectl is now configured to use "ha-998889" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.170282254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735286170254829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=725f7068-e677-473f-a033-d62141820215 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.171014169Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=142a3839-0166-4a82-a334-845be892b350 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.171077817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=142a3839-0166-4a82-a334-845be892b350 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.171349152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=142a3839-0166-4a82-a334-845be892b350 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.211073349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adf55773-1b48-4d1c-8618-5e294a5ff285 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.211166072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adf55773-1b48-4d1c-8618-5e294a5ff285 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.212575078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ba2c06e-a75c-4ee2-ab4a-56e2144fc0b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.213107325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735286213084293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ba2c06e-a75c-4ee2-ab4a-56e2144fc0b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.213660616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02b8f942-798d-421f-b95a-5e3fb4c70df4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.213714247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02b8f942-798d-421f-b95a-5e3fb4c70df4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.213979265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02b8f942-798d-421f-b95a-5e3fb4c70df4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.261753526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e71b7950-d5e0-4edd-a6c3-ff1d4dc14b57 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.261891773Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e71b7950-d5e0-4edd-a6c3-ff1d4dc14b57 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.262998626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cba9e72c-99bc-4aa1-b76a-ad6fbb78f47a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.263798850Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735286263774789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cba9e72c-99bc-4aa1-b76a-ad6fbb78f47a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.264379257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d11ae355-9494-46a4-8ad5-a14e51e6756b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.264448011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d11ae355-9494-46a4-8ad5-a14e51e6756b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.264786552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d11ae355-9494-46a4-8ad5-a14e51e6756b name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.303666836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=858b298b-3cc6-4cb8-b06d-3724f68ff108 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.303739492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=858b298b-3cc6-4cb8-b06d-3724f68ff108 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.305179132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54f7d89c-4fb9-49a1-9c23-f72f71d98897 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.305960861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735286305936785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54f7d89c-4fb9-49a1-9c23-f72f71d98897 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.306477795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80f933ff-3f18-4502-ad02-9729f22541e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.306542701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80f933ff-3f18-4502-ad02-9729f22541e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:34:46 ha-998889 crio[686]: time="2024-08-04 01:34:46.306796040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80f933ff-3f18-4502-ad02-9729f22541e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1bb7230a66693       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5b4550fd8d43d       busybox-fc5497c4f-v468b
	7ce1fc9d2ceb3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   3037e05c8f0db       coredns-7db6d8ff4d-b8ds7
	fe75909603216       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   a3cc1795993d6       coredns-7db6d8ff4d-ddb5m
	426453d5275e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   ba6b4eda679dc       storage-provisioner
	e987e973e97a5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   120c9a2eb52aa       kindnet-gc22h
	e32fb23a61d2d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   9689d6db72b02       kube-proxy-56twz
	95795d7d25530       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   75eeb21e3e26a       kube-vip-ha-998889
	cbd934bafbbf1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   580e42f37b240       etcd-ha-998889
	3f264e5c2143d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   c25b0800264cf       kube-scheduler-ha-998889
	0c31b954330c4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   35f3b8346489b       kube-controller-manager-ha-998889
	8d16347be7d62       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   fdd7687c140db       kube-apiserver-ha-998889
	
	
	==> coredns [7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947] <==
	[INFO] 10.244.1.2:49038 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013018026s
	[INFO] 10.244.0.4:40557 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000085015s
	[INFO] 10.244.1.2:53619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211794s
	[INFO] 10.244.1.2:44820 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171002s
	[INFO] 10.244.1.2:54493 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154283s
	[INFO] 10.244.1.2:45366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188537s
	[INFO] 10.244.1.2:42179 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223485s
	[INFO] 10.244.2.2:48925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257001s
	[INFO] 10.244.2.2:46133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001441239s
	[INFO] 10.244.2.2:40620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108193s
	[INFO] 10.244.2.2:45555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071897s
	[INFO] 10.244.0.4:57133 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007622s
	[INFO] 10.244.0.4:45128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012024s
	[INFO] 10.244.0.4:33660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084733s
	[INFO] 10.244.1.2:48368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133283s
	[INFO] 10.244.1.2:42909 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130327s
	[INFO] 10.244.1.2:54181 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067193s
	[INFO] 10.244.2.2:36881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125847s
	[INFO] 10.244.2.2:52948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090317s
	[INFO] 10.244.1.2:34080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132803s
	[INFO] 10.244.1.2:38625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147078s
	[INFO] 10.244.2.2:41049 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205078s
	[INFO] 10.244.2.2:47520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094037s
	[INFO] 10.244.2.2:48004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211339s
	[INFO] 10.244.0.4:52706 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087998s
	
	
	==> coredns [fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9] <==
	[INFO] 10.244.1.2:57793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00333282s
	[INFO] 10.244.1.2:54028 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012772192s
	[INFO] 10.244.1.2:49028 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171231s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001982538s
	[INFO] 10.244.2.2:59450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165578s
	[INFO] 10.244.2.2:44599 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132406s
	[INFO] 10.244.2.2:38280 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086968s
	[INFO] 10.244.0.4:52340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111664s
	[INFO] 10.244.0.4:55794 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001989197s
	[INFO] 10.244.0.4:56345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.0.4:50778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090371s
	[INFO] 10.244.0.4:47116 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132729s
	[INFO] 10.244.1.2:54780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104255s
	[INFO] 10.244.2.2:52086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092312s
	[INFO] 10.244.2.2:36096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008133s
	[INFO] 10.244.0.4:35645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084037s
	[INFO] 10.244.0.4:57031 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00004652s
	[INFO] 10.244.0.4:53264 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005834s
	[INFO] 10.244.0.4:52476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111362s
	[INFO] 10.244.1.2:39754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161853s
	[INFO] 10.244.1.2:44320 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018965s
	[INFO] 10.244.2.2:58250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133355s
	[INFO] 10.244.0.4:34248 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137551s
	[INFO] 10.244.0.4:46858 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082831s
	[INFO] 10.244.0.4:52801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017483s
	
	
	==> describe nodes <==
	Name:               ha-998889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:28:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-998889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9bfc18a8dd4a25ae5d0b652cb98f91
	  System UUID:                fa9bfc18-a8dd-4a25-ae5d-0b652cb98f91
	  Boot ID:                    ddede9e4-4547-41a5-820a-f6568caf06a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v468b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7db6d8ff4d-b8ds7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7db6d8ff4d-ddb5m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-998889                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-gc22h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-998889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-998889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-56twz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-998889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-998889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m37s (x7 over 6m37s)  kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x8 over 6m37s)  kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x8 over 6m37s)  kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s                  kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s                  kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s                  kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal  NodeReady                5m59s                  kubelet          Node ha-998889 status is now: NodeReady
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	
	
	Name:               ha-998889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:29:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:32:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-998889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8754ed7ba6c04d5d808bf540e4c5a093
	  System UUID:                8754ed7b-a6c0-4d5d-808b-f540e4c5a093
	  Boot ID:                    aab72127-3c35-4594-8bb2-579116036f9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jqps                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-998889-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-mm9t2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-998889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-998889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-proxy-v4j77                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-998889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-998889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m26s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m26s)  kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m26s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-998889-m02 status is now: NodeNotReady
	
	
	Name:               ha-998889-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_30_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:30:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:34:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-998889-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49ee34ab17a14b2ba68118c94f92f005
	  System UUID:                49ee34ab-17a1-4b2b-a681-18c94f92f005
	  Boot ID:                    21c0e6a6-ac5b-4e27-887c-e134468a610a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8wnwt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-998889-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-rsp5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-998889-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-998889-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-wj5z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-998889-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-998889-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-998889-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m8s                 node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal  RegisteredNode           3m51s                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	
	
	Name:               ha-998889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:31:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:32:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-998889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e86557b9788446aca3bd64c7bcc82957
	  System UUID:                e86557b9-7884-46ac-a3bd-64c7bcc82957
	  Boot ID:                    1141c25d-ddf9-401d-80e6-f074ce6278a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5cv7z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-9qdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  Starting                 3m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-998889-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 4 01:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050286] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040198] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.778082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.532172] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.604472] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.869407] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.063774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058921] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.163748] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.144819] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.274744] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Aug 4 01:28] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.067193] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.231084] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.024644] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.031121] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.102027] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.498623] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.120089] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 4 01:29] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6] <==
	{"level":"warn","ts":"2024-08-04T01:34:46.600915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.615682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.626248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.636149Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.640502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.644806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.652431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.658573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.66522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.668263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.669231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.677223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.695113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.715123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.73017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.737166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.744161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.751347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.766452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.770346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.784513Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.785473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.795391Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.857605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:34:46.85975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:34:46 up 7 min,  0 users,  load average: 0.41, 0.36, 0.20
	Linux ha-998889 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957] <==
	I0804 01:34:16.899968       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:34:26.899924       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:34:26.899953       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:34:26.900113       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:34:26.900123       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:34:26.900229       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:34:26.900265       1 main.go:299] handling current node
	I0804 01:34:26.900280       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:34:26.900287       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:34:36.891401       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:34:36.891443       1 main.go:299] handling current node
	I0804 01:34:36.891458       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:34:36.891463       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:34:36.892991       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:34:36.893027       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:34:36.893168       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:34:36.893196       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:34:46.891524       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:34:46.891566       1 main.go:299] handling current node
	I0804 01:34:46.891585       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:34:46.891591       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:34:46.891710       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:34:46.891715       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:34:46.891757       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:34:46.891761       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318] <==
	I0804 01:28:29.592911       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0804 01:28:29.646614       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0804 01:29:05.872124       1 trace.go:236] Trace[178944675]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.12,type:*v1.Endpoints,resource:apiServerIPInfo (04-Aug-2024 01:29:05.313) (total time: 558ms):
	Trace[178944675]: ---"initial value restored" 169ms (01:29:05.483)
	Trace[178944675]: ---"Transaction prepared" 128ms (01:29:05.611)
	Trace[178944675]: ---"Txn call completed" 260ms (01:29:05.872)
	Trace[178944675]: [558.473332ms] [558.473332ms] END
	E0804 01:31:11.707811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34952: use of closed network connection
	E0804 01:31:11.909623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34958: use of closed network connection
	E0804 01:31:12.098500       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34984: use of closed network connection
	E0804 01:31:12.301392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35012: use of closed network connection
	E0804 01:31:12.498051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35028: use of closed network connection
	E0804 01:31:12.683814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35054: use of closed network connection
	E0804 01:31:12.859058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35082: use of closed network connection
	E0804 01:31:13.046392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35100: use of closed network connection
	E0804 01:31:13.254630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35114: use of closed network connection
	E0804 01:31:13.563457       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35128: use of closed network connection
	E0804 01:31:13.742693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35146: use of closed network connection
	E0804 01:31:13.942191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35162: use of closed network connection
	E0804 01:31:14.121659       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35174: use of closed network connection
	E0804 01:31:14.301015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35188: use of closed network connection
	E0804 01:31:14.483485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35204: use of closed network connection
	I0804 01:31:46.648788       1 trace.go:236] Trace[117369386]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:28b6d22f-aae3-4d5c-b499-327f8ad98fed,client:192.168.39.183,api-group:,api-version:v1,name:kube-proxy-thr67,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-thr67,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:DELETE (04-Aug-2024 01:31:45.827) (total time: 821ms):
	Trace[117369386]: ---"Object deleted from database" 383ms (01:31:46.648)
	Trace[117369386]: [821.135533ms] [821.135533ms] END
	
	
	==> kube-controller-manager [0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977] <==
	I0804 01:30:38.987640       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-998889-m03"
	I0804 01:31:06.800547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.04577ms"
	I0804 01:31:06.837340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.713966ms"
	I0804 01:31:06.837476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.23µs"
	I0804 01:31:06.838216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.963µs"
	I0804 01:31:06.841167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.278µs"
	I0804 01:31:06.947939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.599132ms"
	I0804 01:31:07.171116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.926535ms"
	I0804 01:31:07.217963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.786157ms"
	I0804 01:31:07.218619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.556µs"
	I0804 01:31:07.826554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.639µs"
	I0804 01:31:10.206264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.321251ms"
	I0804 01:31:10.206339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.205µs"
	I0804 01:31:10.531240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.132371ms"
	I0804 01:31:10.531328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.181µs"
	I0804 01:31:11.249118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.716992ms"
	I0804 01:31:11.249310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.555µs"
	E0804 01:31:43.715288       1 certificate_controller.go:146] Sync csr-8dqlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8dqlk": the object has been modified; please apply your changes to the latest version and try again
	I0804 01:31:43.964799       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-998889-m04\" does not exist"
	I0804 01:31:43.981389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-998889-m04" podCIDRs=["10.244.3.0/24"]
	I0804 01:31:44.000233       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-998889-m04"
	I0804 01:32:04.866485       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	I0804 01:32:59.031401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	I0804 01:32:59.137628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.081206ms"
	I0804 01:32:59.140330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.396µs"
	
	
	==> kube-proxy [e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372] <==
	I0804 01:28:30.963483       1 server_linux.go:69] "Using iptables proxy"
	I0804 01:28:30.980587       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0804 01:28:31.031710       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:28:31.031766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:28:31.031782       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:28:31.038022       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:28:31.038663       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:28:31.038747       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:28:31.040962       1 config.go:192] "Starting service config controller"
	I0804 01:28:31.041184       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:28:31.041290       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:28:31.041313       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:28:31.043474       1 config.go:319] "Starting node config controller"
	I0804 01:28:31.043567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:28:31.141930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:28:31.141960       1 shared_informer.go:320] Caches are synced for service config
	I0804 01:28:31.143968       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df] <==
	I0804 01:30:37.910750       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rsp5h" node="ha-998889-m03"
	E0804 01:30:37.918545       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wj5z9\": pod kube-proxy-wj5z9 is already assigned to node \"ha-998889-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wj5z9" node="ha-998889-m03"
	E0804 01:30:37.919601       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 36f91407-7b5a-4101-b7a9-9adbf18a209f(kube-system/kube-proxy-wj5z9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wj5z9"
	E0804 01:30:37.919740       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wj5z9\": pod kube-proxy-wj5z9 is already assigned to node \"ha-998889-m03\"" pod="kube-system/kube-proxy-wj5z9"
	I0804 01:30:37.919824       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wj5z9" node="ha-998889-m03"
	E0804 01:31:06.770278       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8wnwt\": pod busybox-fc5497c4f-8wnwt is already assigned to node \"ha-998889-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8wnwt" node="ha-998889-m03"
	E0804 01:31:06.770619       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7668d0a2-3740-4ab0-aa7b-60b70fee82fc(default/busybox-fc5497c4f-8wnwt) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-8wnwt"
	E0804 01:31:06.770767       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8wnwt\": pod busybox-fc5497c4f-8wnwt is already assigned to node \"ha-998889-m03\"" pod="default/busybox-fc5497c4f-8wnwt"
	I0804 01:31:06.770966       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8wnwt" node="ha-998889-m03"
	E0804 01:31:06.819451       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v468b\": pod busybox-fc5497c4f-v468b is already assigned to node \"ha-998889\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v468b" node="ha-998889"
	E0804 01:31:06.819751       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c(default/busybox-fc5497c4f-v468b) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-v468b"
	E0804 01:31:06.819966       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v468b\": pod busybox-fc5497c4f-v468b is already assigned to node \"ha-998889\"" pod="default/busybox-fc5497c4f-v468b"
	I0804 01:31:06.820439       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v468b" node="ha-998889"
	E0804 01:31:44.050478       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5cv7z\": pod kindnet-5cv7z is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5cv7z" node="ha-998889-m04"
	E0804 01:31:44.050568       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6e18a7fd-57f2-4672-8c67-bde831c5fce7(kube-system/kindnet-5cv7z) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5cv7z"
	E0804 01:31:44.050600       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5cv7z\": pod kindnet-5cv7z is already assigned to node \"ha-998889-m04\"" pod="kube-system/kindnet-5cv7z"
	I0804 01:31:44.050635       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5cv7z" node="ha-998889-m04"
	E0804 01:31:44.051326       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdn6\": pod kube-proxy-9qdn6 is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9qdn6" node="ha-998889-m04"
	E0804 01:31:44.051400       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aae55e56-e5f1-4ce0-9427-eaf1ae449bee(kube-system/kube-proxy-9qdn6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9qdn6"
	E0804 01:31:44.051418       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdn6\": pod kube-proxy-9qdn6 is already assigned to node \"ha-998889-m04\"" pod="kube-system/kube-proxy-9qdn6"
	I0804 01:31:44.051440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9qdn6" node="ha-998889-m04"
	E0804 01:31:44.221543       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thr67\": pod kube-proxy-thr67 is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thr67" node="ha-998889-m04"
	E0804 01:31:44.221899       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 777f50c1-032c-4f42-82e3-50a8bd8e1302(kube-system/kube-proxy-thr67) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thr67"
	E0804 01:31:44.223222       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thr67\": pod kube-proxy-thr67 is already assigned to node \"ha-998889-m04\"" pod="kube-system/kube-proxy-thr67"
	I0804 01:31:44.223375       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thr67" node="ha-998889-m04"
	
	
	==> kubelet <==
	Aug 04 01:30:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:30:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:31:06 ha-998889 kubelet[1372]: I0804 01:31:06.787553    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b8ds7" podStartSLOduration=156.787506433 podStartE2EDuration="2m36.787506433s" podCreationTimestamp="2024-08-04 01:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-04 01:28:48.710222472 +0000 UTC m=+32.471167810" watchObservedRunningTime="2024-08-04 01:31:06.787506433 +0000 UTC m=+170.548451771"
	Aug 04 01:31:06 ha-998889 kubelet[1372]: I0804 01:31:06.788166    1372 topology_manager.go:215] "Topology Admit Handler" podUID="c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c" podNamespace="default" podName="busybox-fc5497c4f-v468b"
	Aug 04 01:31:06 ha-998889 kubelet[1372]: I0804 01:31:06.900500    1372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh6gg\" (UniqueName: \"kubernetes.io/projected/c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c-kube-api-access-hh6gg\") pod \"busybox-fc5497c4f-v468b\" (UID: \"c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c\") " pod="default/busybox-fc5497c4f-v468b"
	Aug 04 01:31:16 ha-998889 kubelet[1372]: E0804 01:31:16.429736    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:31:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:32:16 ha-998889 kubelet[1372]: E0804 01:32:16.426361    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:32:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:33:16 ha-998889 kubelet[1372]: E0804 01:33:16.440145    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:33:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:34:16 ha-998889 kubelet[1372]: E0804 01:34:16.431125    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:34:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-998889 -n ha-998889
helpers_test.go:261: (dbg) Run:  kubectl --context ha-998889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (57.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (3.205186682s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:34:51.439600  117331 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:34:51.439717  117331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:51.439729  117331 out.go:304] Setting ErrFile to fd 2...
	I0804 01:34:51.439733  117331 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:51.439937  117331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:34:51.440141  117331 out.go:298] Setting JSON to false
	I0804 01:34:51.440166  117331 mustload.go:65] Loading cluster: ha-998889
	I0804 01:34:51.440209  117331 notify.go:220] Checking for updates...
	I0804 01:34:51.440655  117331 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:34:51.440676  117331 status.go:255] checking status of ha-998889 ...
	I0804 01:34:51.441251  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.441293  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.456796  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I0804 01:34:51.457255  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.457826  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.457847  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.458232  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.458480  117331 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:34:51.460090  117331 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:34:51.460105  117331 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:51.460377  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.460413  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.475209  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0804 01:34:51.475646  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.476106  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.476138  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.476447  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.476611  117331 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:34:51.479541  117331 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:51.479960  117331 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:51.479988  117331 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:51.480138  117331 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:51.480446  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.480489  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.495569  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0804 01:34:51.495976  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.496409  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.496432  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.496717  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.496943  117331 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:34:51.497135  117331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:51.497156  117331 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:34:51.499829  117331 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:51.500241  117331 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:51.500269  117331 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:51.500433  117331 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:34:51.500611  117331 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:34:51.500757  117331 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:34:51.500911  117331 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:34:51.591752  117331 ssh_runner.go:195] Run: systemctl --version
	I0804 01:34:51.599592  117331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:51.616770  117331 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:51.616798  117331 api_server.go:166] Checking apiserver status ...
	I0804 01:34:51.616831  117331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:51.630982  117331 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:34:51.640718  117331 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:51.640804  117331 ssh_runner.go:195] Run: ls
	I0804 01:34:51.645214  117331 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:51.649545  117331 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:51.649572  117331 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:34:51.649582  117331 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:51.649597  117331 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:34:51.649883  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.649923  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.665622  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I0804 01:34:51.666047  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.666564  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.666591  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.666911  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.667149  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:34:51.668736  117331 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:34:51.668752  117331 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:51.669033  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.669067  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.683889  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0804 01:34:51.684391  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.684876  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.684899  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.685183  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.685333  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:34:51.688211  117331 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:51.688658  117331 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:51.688683  117331 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:51.688827  117331 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:51.689181  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:51.689222  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:51.704371  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0804 01:34:51.704790  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:51.705288  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:51.705315  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:51.705622  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:51.705798  117331 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:34:51.705999  117331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:51.706033  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:34:51.708830  117331 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:51.709389  117331 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:51.709417  117331 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:51.709595  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:34:51.709739  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:34:51.709834  117331 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:34:51.710082  117331 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:34:54.241609  117331 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:34:54.241706  117331 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:34:54.241725  117331 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:54.241732  117331 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:34:54.241762  117331 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:54.241772  117331 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:34:54.242085  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.242127  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.258183  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0804 01:34:54.258606  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.259075  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.259097  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.259420  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.259670  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:34:54.261398  117331 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:34:54.261414  117331 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:54.261692  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.261724  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.277488  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0804 01:34:54.278117  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.278720  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.278745  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.279055  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.279209  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:34:54.281940  117331 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:54.282425  117331 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:54.282458  117331 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:54.282612  117331 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:54.283020  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.283064  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.298085  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0804 01:34:54.298556  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.299079  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.299108  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.299387  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.299588  117331 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:34:54.299745  117331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:54.299766  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:34:54.302619  117331 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:54.303081  117331 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:54.303108  117331 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:54.303229  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:34:54.303390  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:34:54.303535  117331 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:34:54.303713  117331 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:34:54.390018  117331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:54.406044  117331 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:54.406077  117331 api_server.go:166] Checking apiserver status ...
	I0804 01:34:54.406129  117331 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:54.422163  117331 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:34:54.432629  117331 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:54.432686  117331 ssh_runner.go:195] Run: ls
	I0804 01:34:54.437350  117331 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:54.441784  117331 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:54.441810  117331 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:34:54.441822  117331 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:54.441858  117331 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:34:54.442244  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.442291  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.458168  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41341
	I0804 01:34:54.458618  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.459059  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.459079  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.459419  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.459600  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:34:54.461062  117331 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:34:54.461079  117331 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:54.461381  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.461422  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.476214  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
	I0804 01:34:54.476687  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.477117  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.477138  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.477476  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.477629  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:34:54.480236  117331 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:54.480687  117331 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:54.480724  117331 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:54.480858  117331 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:54.481223  117331 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:54.481262  117331 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:54.496835  117331 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33629
	I0804 01:34:54.497256  117331 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:54.497806  117331 main.go:141] libmachine: Using API Version  1
	I0804 01:34:54.497827  117331 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:54.498140  117331 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:54.498361  117331 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:34:54.498535  117331 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:54.498558  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:34:54.501141  117331 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:54.501561  117331 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:54.501598  117331 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:54.501731  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:34:54.501919  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:34:54.502114  117331 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:34:54.502250  117331 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:34:54.585693  117331 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:54.601292  117331 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (2.547700505s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:34:55.170943  117431 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:34:55.171237  117431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:55.171248  117431 out.go:304] Setting ErrFile to fd 2...
	I0804 01:34:55.171253  117431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:55.171442  117431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:34:55.171613  117431 out.go:298] Setting JSON to false
	I0804 01:34:55.171639  117431 mustload.go:65] Loading cluster: ha-998889
	I0804 01:34:55.171682  117431 notify.go:220] Checking for updates...
	I0804 01:34:55.172225  117431 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:34:55.172247  117431 status.go:255] checking status of ha-998889 ...
	I0804 01:34:55.172684  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.172766  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.190489  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34265
	I0804 01:34:55.191012  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.191618  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.191640  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.192009  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.192196  117431 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:34:55.193962  117431 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:34:55.193983  117431 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:55.194296  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.194343  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.209748  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39931
	I0804 01:34:55.210195  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.210738  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.210770  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.211182  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.211381  117431 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:34:55.214420  117431 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:55.214823  117431 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:55.214850  117431 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:55.215062  117431 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:55.215374  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.215430  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.231216  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I0804 01:34:55.231680  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.232281  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.232314  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.232675  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.232882  117431 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:34:55.233082  117431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:55.233107  117431 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:34:55.236769  117431 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:55.237292  117431 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:55.237339  117431 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:55.237567  117431 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:34:55.237765  117431 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:34:55.237926  117431 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:34:55.238075  117431 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:34:55.326033  117431 ssh_runner.go:195] Run: systemctl --version
	I0804 01:34:55.332744  117431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:55.348256  117431 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:55.348289  117431 api_server.go:166] Checking apiserver status ...
	I0804 01:34:55.348326  117431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:55.372485  117431 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:34:55.382308  117431 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:55.382387  117431 ssh_runner.go:195] Run: ls
	I0804 01:34:55.387070  117431 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:55.393276  117431 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:55.393308  117431 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:34:55.393320  117431 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:55.393336  117431 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:34:55.393760  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.393803  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.409061  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43509
	I0804 01:34:55.409502  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.410007  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.410032  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.410380  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.410563  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:34:55.412218  117431 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:34:55.412237  117431 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:55.412536  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.412588  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.428911  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45747
	I0804 01:34:55.429416  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.429914  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.429937  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.430288  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.430468  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:34:55.433101  117431 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:55.433518  117431 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:55.433550  117431 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:55.433657  117431 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:55.433944  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:55.433983  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:55.449930  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0804 01:34:55.450382  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:55.450877  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:55.450898  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:55.451208  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:55.451380  117431 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:34:55.451609  117431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:55.451635  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:34:55.454478  117431 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:55.454903  117431 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:55.454923  117431 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:55.455085  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:34:55.455273  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:34:55.455445  117431 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:34:55.455602  117431 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:34:57.313701  117431 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:34:57.313816  117431 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:34:57.313833  117431 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:57.313857  117431 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:34:57.313885  117431 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:34:57.313896  117431 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:34:57.314263  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.314355  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.329469  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35389
	I0804 01:34:57.330055  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.330568  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.330594  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.330930  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.331109  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:34:57.332571  117431 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:34:57.332593  117431 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:57.332900  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.332941  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.348075  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I0804 01:34:57.348560  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.349065  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.349096  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.349433  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.349633  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:34:57.352654  117431 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:57.353102  117431 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:57.353157  117431 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:57.353256  117431 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:34:57.353588  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.353631  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.369755  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I0804 01:34:57.370190  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.370659  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.370683  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.371057  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.371255  117431 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:34:57.371470  117431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:57.371494  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:34:57.373979  117431 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:57.374397  117431 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:34:57.374424  117431 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:34:57.374592  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:34:57.374776  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:34:57.374923  117431 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:34:57.375050  117431 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:34:57.464945  117431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:57.480595  117431 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:57.480630  117431 api_server.go:166] Checking apiserver status ...
	I0804 01:34:57.480671  117431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:57.495723  117431 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:34:57.506178  117431 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:57.506229  117431 ssh_runner.go:195] Run: ls
	I0804 01:34:57.511546  117431 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:57.515808  117431 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:57.515834  117431 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:34:57.515853  117431 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:57.515872  117431 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:34:57.516261  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.516305  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.531753  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0804 01:34:57.532238  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.532786  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.532811  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.533128  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.533369  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:34:57.534991  117431 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:34:57.535007  117431 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:57.535315  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.535356  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.550486  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43871
	I0804 01:34:57.550960  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.551422  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.551444  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.551749  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.551921  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:34:57.554687  117431 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:57.555160  117431 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:57.555189  117431 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:57.555340  117431 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:34:57.555628  117431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:57.555665  117431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:57.571601  117431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0804 01:34:57.572007  117431 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:57.572464  117431 main.go:141] libmachine: Using API Version  1
	I0804 01:34:57.572488  117431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:57.572788  117431 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:57.572956  117431 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:34:57.573139  117431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:57.573166  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:34:57.575699  117431 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:57.576085  117431 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:34:57.576112  117431 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:34:57.576276  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:34:57.576436  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:34:57.576573  117431 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:34:57.576687  117431 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:34:57.661306  117431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:57.675785  117431 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (4.515530776s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:34:59.548201  117531 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:34:59.548459  117531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:59.548468  117531 out.go:304] Setting ErrFile to fd 2...
	I0804 01:34:59.548472  117531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:34:59.548640  117531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:34:59.548788  117531 out.go:298] Setting JSON to false
	I0804 01:34:59.548809  117531 mustload.go:65] Loading cluster: ha-998889
	I0804 01:34:59.548936  117531 notify.go:220] Checking for updates...
	I0804 01:34:59.549178  117531 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:34:59.549192  117531 status.go:255] checking status of ha-998889 ...
	I0804 01:34:59.549589  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.549670  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.564622  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0804 01:34:59.565134  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.565849  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.565877  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.566301  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.566492  117531 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:34:59.568176  117531 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:34:59.568193  117531 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:59.568475  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.568517  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.583745  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34897
	I0804 01:34:59.584140  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.584644  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.584676  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.585018  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.585208  117531 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:34:59.587805  117531 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:59.588270  117531 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:59.588283  117531 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:59.588455  117531 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:34:59.588739  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.588770  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.603431  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I0804 01:34:59.603871  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.604389  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.604414  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.604745  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.604931  117531 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:34:59.605131  117531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:59.605157  117531 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:34:59.608076  117531 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:59.608459  117531 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:34:59.608500  117531 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:34:59.608674  117531 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:34:59.608902  117531 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:34:59.609225  117531 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:34:59.609426  117531 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:34:59.693246  117531 ssh_runner.go:195] Run: systemctl --version
	I0804 01:34:59.699521  117531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:34:59.715528  117531 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:34:59.715565  117531 api_server.go:166] Checking apiserver status ...
	I0804 01:34:59.715612  117531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:34:59.730205  117531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:34:59.741185  117531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:34:59.741279  117531 ssh_runner.go:195] Run: ls
	I0804 01:34:59.747058  117531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:34:59.751495  117531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:34:59.751524  117531 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:34:59.751536  117531 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:34:59.751559  117531 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:34:59.751967  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.752012  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.767666  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34387
	I0804 01:34:59.768090  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.768607  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.768636  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.768965  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.769162  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:34:59.770799  117531 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:34:59.770819  117531 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:59.771129  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.771168  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.786719  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39509
	I0804 01:34:59.787184  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.787684  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.787710  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.788028  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.788230  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:34:59.790819  117531 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:59.791177  117531 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:59.791210  117531 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:59.791371  117531 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:34:59.791714  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:34:59.791753  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:34:59.807254  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I0804 01:34:59.807721  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:34:59.808194  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:34:59.808212  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:34:59.808510  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:34:59.808685  117531 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:34:59.808922  117531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:34:59.808948  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:34:59.811934  117531 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:59.812408  117531 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:34:59.812431  117531 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:34:59.812590  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:34:59.812759  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:34:59.812950  117531 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:34:59.813106  117531 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:35:00.385644  117531 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:00.385712  117531 retry.go:31] will retry after 187.95887ms: dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:03.649649  117531 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:03.649738  117531 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:35:03.649755  117531 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:03.649763  117531 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:35:03.649789  117531 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:03.649796  117531 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:03.650176  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.650238  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.666036  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36305
	I0804 01:35:03.666486  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.667054  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.667088  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.667453  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.667699  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:03.669516  117531 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:03.669542  117531 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:03.669922  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.669967  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.685249  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41345
	I0804 01:35:03.685756  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.686273  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.686298  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.686623  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.686812  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:03.689986  117531 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:03.690411  117531 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:03.690446  117531 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:03.690599  117531 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:03.690947  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.690984  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.706546  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
	I0804 01:35:03.706999  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.707486  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.707513  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.707826  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.708048  117531 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:03.708265  117531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:03.708284  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:03.710798  117531 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:03.711177  117531 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:03.711209  117531 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:03.711346  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:03.711519  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:03.711680  117531 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:03.711810  117531 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:03.797136  117531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:03.814293  117531 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:03.814322  117531 api_server.go:166] Checking apiserver status ...
	I0804 01:35:03.814357  117531 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:03.834027  117531 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:03.845539  117531 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:03.845611  117531 ssh_runner.go:195] Run: ls
	I0804 01:35:03.851018  117531 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:03.859715  117531 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:03.859750  117531 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:03.859764  117531 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:03.859785  117531 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:03.860261  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.860316  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.875485  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0804 01:35:03.875970  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.876462  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.876484  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.876796  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.876982  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:03.878418  117531 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:03.878439  117531 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:03.878852  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.878901  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.893907  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0804 01:35:03.894375  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.894860  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.894883  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.895194  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.895358  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:03.898033  117531 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:03.898472  117531 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:03.898498  117531 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:03.898661  117531 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:03.899050  117531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:03.899093  117531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:03.915555  117531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0804 01:35:03.915948  117531 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:03.916449  117531 main.go:141] libmachine: Using API Version  1
	I0804 01:35:03.916475  117531 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:03.916767  117531 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:03.917001  117531 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:03.917222  117531 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:03.917244  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:03.920181  117531 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:03.920599  117531 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:03.920637  117531 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:03.920755  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:03.920938  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:03.921070  117531 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:03.921226  117531 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:04.004796  117531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:04.019541  117531 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (4.21889378s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:06.136865  117630 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:06.136975  117630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:06.136983  117630 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:06.136987  117630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:06.137149  117630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:06.137323  117630 out.go:298] Setting JSON to false
	I0804 01:35:06.137347  117630 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:06.137456  117630 notify.go:220] Checking for updates...
	I0804 01:35:06.137725  117630 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:06.137739  117630 status.go:255] checking status of ha-998889 ...
	I0804 01:35:06.138152  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.138216  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.158130  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0804 01:35:06.158613  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.159299  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.159323  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.159863  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.160216  117630 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:35:06.162299  117630 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:35:06.162321  117630 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:06.162656  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.162692  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.178836  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I0804 01:35:06.179295  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.179745  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.179768  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.180084  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.180292  117630 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:35:06.182802  117630 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:06.183233  117630 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:06.183259  117630 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:06.183391  117630 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:06.183676  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.183711  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.198663  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0804 01:35:06.199077  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.199494  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.199515  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.199853  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.200049  117630 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:35:06.200301  117630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:06.200328  117630 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:35:06.202763  117630 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:06.203193  117630 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:06.203233  117630 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:06.203377  117630 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:35:06.203571  117630 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:35:06.203740  117630 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:35:06.203915  117630 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:35:06.285285  117630 ssh_runner.go:195] Run: systemctl --version
	I0804 01:35:06.292571  117630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:06.307080  117630 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:06.307112  117630 api_server.go:166] Checking apiserver status ...
	I0804 01:35:06.307147  117630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:06.321517  117630 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:35:06.330953  117630 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:06.331043  117630 ssh_runner.go:195] Run: ls
	I0804 01:35:06.335514  117630 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:06.342229  117630 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:06.342255  117630 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:35:06.342266  117630 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:06.342283  117630 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:35:06.342609  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.342644  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.358696  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I0804 01:35:06.359169  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.359650  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.359675  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.360070  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.360275  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:35:06.362009  117630 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:35:06.362027  117630 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:06.362397  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.362444  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.377477  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0804 01:35:06.377973  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.378494  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.378519  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.378800  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.378996  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:35:06.381715  117630 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:06.382149  117630 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:06.382183  117630 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:06.382367  117630 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:06.382664  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:06.382700  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:06.397711  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I0804 01:35:06.398182  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:06.398649  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:06.398668  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:06.399011  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:06.399181  117630 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:35:06.399373  117630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:06.399396  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:35:06.402087  117630 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:06.402463  117630 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:06.402486  117630 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:06.402703  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:35:06.402912  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:35:06.403084  117630 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:35:06.403289  117630 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:35:06.721690  117630 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:06.721749  117630 retry.go:31] will retry after 166.115561ms: dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:09.953622  117630 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:09.953725  117630 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:35:09.953770  117630 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:09.953785  117630 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:35:09.953823  117630 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:09.953836  117630 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:09.954330  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:09.954387  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:09.969315  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0804 01:35:09.969780  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:09.970260  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:09.970284  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:09.970651  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:09.970836  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:09.972311  117630 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:09.972326  117630 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:09.972605  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:09.972646  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:09.987188  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0804 01:35:09.987622  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:09.988135  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:09.988166  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:09.988479  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:09.988683  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:09.991563  117630 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:09.991943  117630 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:09.991961  117630 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:09.992123  117630 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:09.992414  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:09.992450  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:10.007051  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
	I0804 01:35:10.007527  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:10.008043  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:10.008070  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:10.008352  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:10.008539  117630 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:10.008758  117630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:10.008781  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:10.011566  117630 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:10.012015  117630 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:10.012043  117630 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:10.012200  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:10.012355  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:10.012457  117630 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:10.012557  117630 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:10.101237  117630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:10.118813  117630 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:10.118849  117630 api_server.go:166] Checking apiserver status ...
	I0804 01:35:10.118908  117630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:10.133828  117630 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:10.145453  117630 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:10.145522  117630 ssh_runner.go:195] Run: ls
	I0804 01:35:10.150393  117630 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:10.156621  117630 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:10.156652  117630 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:10.156660  117630 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:10.156678  117630 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:10.157050  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:10.157098  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:10.174339  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
	I0804 01:35:10.174769  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:10.175318  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:10.175343  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:10.175683  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:10.175933  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:10.177603  117630 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:10.177625  117630 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:10.177934  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:10.178007  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:10.193153  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I0804 01:35:10.193572  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:10.194057  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:10.194078  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:10.194404  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:10.194586  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:10.197327  117630 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:10.197754  117630 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:10.197796  117630 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:10.197911  117630 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:10.198202  117630 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:10.198242  117630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:10.213194  117630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39617
	I0804 01:35:10.213725  117630 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:10.214242  117630 main.go:141] libmachine: Using API Version  1
	I0804 01:35:10.214266  117630 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:10.214565  117630 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:10.214781  117630 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:10.214953  117630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:10.214977  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:10.217649  117630 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:10.218005  117630 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:10.218040  117630 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:10.218131  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:10.218308  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:10.218444  117630 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:10.218599  117630 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:10.300616  117630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:10.314242  117630 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
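Every status probe in the run above fails at the same point: the SSH dial to ha-998889-m02 (192.168.39.200:22) returns "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent. A quick way to confirm whether the guest is reachable at all, independent of minikube, is a plain TCP dial with a short retry loop. The following is a minimal sketch only, assuming the IP and port taken from the DHCP lease in the log; it checks raw TCP reachability and does not perform the SSH handshake or the df probe that sshutil runs afterwards.

```go
// reachability sketch: mimics the dial-and-retry pattern sshutil shows above,
// using the node address from the logged DHCP lease (assumption, not minikube code).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.200:22" // ha-998889-m02, per the lease in the log above
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is reachable\n", attempt, addr)
			return
		}
		// A "no route to host" here corresponds to the sshutil dial failures above.
		fmt.Printf("attempt %d: %v (retrying)\n", attempt, err)
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("node unreachable; minikube status reports the host as Error")
}
```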
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (3.762761018s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:12.810485  117730 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:12.810715  117730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:12.810723  117730 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:12.810728  117730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:12.810942  117730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:12.811137  117730 out.go:298] Setting JSON to false
	I0804 01:35:12.811163  117730 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:12.811274  117730 notify.go:220] Checking for updates...
	I0804 01:35:12.811623  117730 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:12.811641  117730 status.go:255] checking status of ha-998889 ...
	I0804 01:35:12.812145  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:12.812207  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:12.827599  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I0804 01:35:12.828127  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:12.828771  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:12.828799  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:12.829235  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:12.829452  117730 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:35:12.831138  117730 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:35:12.831164  117730 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:12.831502  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:12.831545  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:12.849201  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0804 01:35:12.849666  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:12.850201  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:12.850232  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:12.850577  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:12.850796  117730 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:35:12.853613  117730 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:12.854090  117730 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:12.854126  117730 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:12.854267  117730 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:12.854563  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:12.854600  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:12.870332  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0804 01:35:12.870811  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:12.871399  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:12.871427  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:12.871767  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:12.871977  117730 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:35:12.872184  117730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:12.872205  117730 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:35:12.875138  117730 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:12.875595  117730 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:12.875625  117730 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:12.875787  117730 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:35:12.875972  117730 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:35:12.876339  117730 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:35:12.876526  117730 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:35:12.967724  117730 ssh_runner.go:195] Run: systemctl --version
	I0804 01:35:12.975624  117730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:12.993296  117730 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:12.993326  117730 api_server.go:166] Checking apiserver status ...
	I0804 01:35:12.993386  117730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:13.010609  117730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:35:13.022070  117730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:13.022167  117730 ssh_runner.go:195] Run: ls
	I0804 01:35:13.029917  117730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:13.035881  117730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:13.035910  117730 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:35:13.035920  117730 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:13.035936  117730 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:35:13.036320  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:13.036366  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:13.052085  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I0804 01:35:13.052532  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:13.052987  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:13.053010  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:13.053492  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:13.053688  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:35:13.055316  117730 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:35:13.055333  117730 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:13.055610  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:13.055641  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:13.072155  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33715
	I0804 01:35:13.072571  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:13.073039  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:13.073061  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:13.073403  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:13.073590  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:35:13.076496  117730 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:13.076890  117730 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:13.076919  117730 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:13.077018  117730 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:13.077511  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:13.077564  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:13.092974  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46241
	I0804 01:35:13.093425  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:13.093902  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:13.093926  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:13.094375  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:13.094560  117730 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:35:13.094771  117730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:13.094795  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:35:13.097921  117730 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:13.098354  117730 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:13.098372  117730 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:13.098543  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:35:13.098722  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:35:13.098870  117730 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:35:13.099020  117730 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:35:16.161625  117730 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:16.161740  117730 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:35:16.161758  117730 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:16.161767  117730 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:35:16.161797  117730 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:16.161805  117730 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:16.162180  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.162230  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.179279  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0804 01:35:16.179762  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.180342  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.180368  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.180710  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.180922  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:16.182604  117730 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:16.182626  117730 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:16.182933  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.182966  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.198223  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37065
	I0804 01:35:16.198663  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.199158  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.199184  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.199514  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.199726  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:16.202128  117730 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:16.202590  117730 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:16.202619  117730 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:16.202729  117730 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:16.203047  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.203090  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.218079  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0804 01:35:16.218468  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.218916  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.218939  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.219214  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.219398  117730 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:16.219566  117730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:16.219585  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:16.222343  117730 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:16.222768  117730 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:16.222804  117730 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:16.222893  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:16.223083  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:16.223238  117730 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:16.223414  117730 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:16.309540  117730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:16.325655  117730 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:16.325687  117730 api_server.go:166] Checking apiserver status ...
	I0804 01:35:16.325736  117730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:16.340875  117730 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:16.352220  117730 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:16.352288  117730 ssh_runner.go:195] Run: ls
	I0804 01:35:16.357681  117730 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:16.364745  117730 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:16.364775  117730 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:16.364786  117730 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:16.364812  117730 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:16.365183  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.365223  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.380709  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0804 01:35:16.381277  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.381826  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.381871  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.382224  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.382440  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:16.383958  117730 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:16.383974  117730 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:16.384261  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.384301  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.401125  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I0804 01:35:16.401617  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.402106  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.402128  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.402504  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.402714  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:16.405858  117730 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:16.406362  117730 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:16.406385  117730 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:16.406611  117730 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:16.406967  117730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:16.407044  117730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:16.421918  117730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37167
	I0804 01:35:16.422379  117730 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:16.422927  117730 main.go:141] libmachine: Using API Version  1
	I0804 01:35:16.422954  117730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:16.423297  117730 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:16.423495  117730 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:16.423701  117730 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:16.423722  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:16.426223  117730 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:16.426638  117730 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:16.426664  117730 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:16.426783  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:16.426926  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:16.427084  117730 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:16.427213  117730 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:16.512967  117730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:16.528174  117730 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
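On the nodes that are reachable, the log shows the apiserver probe sequence: pgrep for kube-apiserver, a freezer-cgroup lookup (which fails with status 1 but is non-fatal on cgroup v2 guests), and finally a GET against https://192.168.39.254:8443/healthz that must return 200 "ok". Below is a minimal sketch of just that last step, assuming the load-balanced endpoint from the log; because the cluster serves a self-signed certificate, verification is skipped here purely for a manual spot check, whereas the real client authenticates via the kubeconfig.

```go
// healthz sketch: reproduces the final apiserver health check seen in the log,
// against the assumed endpoint https://192.168.39.254:8443/healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // manual check only
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Passing runs above log "returned 200:" followed by "ok".
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```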
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (3.753894787s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:20.910433  117848 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:20.910710  117848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:20.910721  117848 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:20.910725  117848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:20.910897  117848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:20.911054  117848 out.go:298] Setting JSON to false
	I0804 01:35:20.911082  117848 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:20.911134  117848 notify.go:220] Checking for updates...
	I0804 01:35:20.911484  117848 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:20.911502  117848 status.go:255] checking status of ha-998889 ...
	I0804 01:35:20.911882  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:20.911939  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:20.927601  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0804 01:35:20.928016  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:20.928581  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:20.928604  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:20.929044  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:20.929324  117848 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:35:20.930917  117848 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:35:20.930935  117848 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:20.931290  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:20.931347  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:20.946048  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40329
	I0804 01:35:20.946443  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:20.946916  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:20.946937  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:20.947259  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:20.947496  117848 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:35:20.950138  117848 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:20.950545  117848 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:20.950570  117848 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:20.950707  117848 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:20.951009  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:20.951065  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:20.965585  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0804 01:35:20.966075  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:20.966560  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:20.966597  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:20.966907  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:20.967069  117848 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:35:20.967287  117848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:20.967315  117848 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:35:20.969966  117848 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:20.970317  117848 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:20.970337  117848 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:20.970482  117848 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:35:20.970661  117848 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:35:20.970812  117848 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:35:20.970992  117848 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:35:21.057338  117848 ssh_runner.go:195] Run: systemctl --version
	I0804 01:35:21.067144  117848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:21.083045  117848 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:21.083073  117848 api_server.go:166] Checking apiserver status ...
	I0804 01:35:21.083105  117848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:21.101061  117848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:35:21.113893  117848 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:21.113971  117848 ssh_runner.go:195] Run: ls
	I0804 01:35:21.119067  117848 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:21.125591  117848 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:21.125624  117848 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:35:21.125638  117848 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:21.125664  117848 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:35:21.126001  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:21.126038  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:21.141940  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39117
	I0804 01:35:21.142396  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:21.142879  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:21.142899  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:21.143206  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:21.143403  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:35:21.145019  117848 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:35:21.145037  117848 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:21.145434  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:21.145475  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:21.161402  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0804 01:35:21.161970  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:21.162512  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:21.162542  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:21.162886  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:21.163065  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:35:21.165590  117848 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:21.166026  117848 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:21.166051  117848 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:21.166202  117848 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:35:21.166499  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:21.166534  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:21.181546  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0804 01:35:21.182005  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:21.182561  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:21.182582  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:21.182868  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:21.183068  117848 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:35:21.183295  117848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:21.183321  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:35:21.186015  117848 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:21.186432  117848 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:35:21.186453  117848 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:35:21.186612  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:35:21.186762  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:35:21.186916  117848 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:35:21.187054  117848 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	W0804 01:35:24.257584  117848 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.200:22: connect: no route to host
	W0804 01:35:24.257688  117848 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	E0804 01:35:24.257714  117848 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:24.257728  117848 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:35:24.257753  117848 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	I0804 01:35:24.257767  117848 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:24.258229  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.258284  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.275040  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39833
	I0804 01:35:24.275535  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.275962  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.275984  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.276287  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.276500  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:24.278159  117848 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:24.278175  117848 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:24.278479  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.278517  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.294141  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43137
	I0804 01:35:24.294546  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.295027  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.295047  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.295385  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.295576  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:24.298200  117848 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:24.298561  117848 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:24.298599  117848 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:24.298758  117848 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:24.299154  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.299194  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.315328  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0804 01:35:24.315739  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.316190  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.316211  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.316523  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.316731  117848 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:24.316927  117848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:24.316950  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:24.319382  117848 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:24.319778  117848 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:24.319816  117848 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:24.319918  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:24.320080  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:24.320221  117848 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:24.320360  117848 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:24.406095  117848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:24.421627  117848 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:24.421656  117848 api_server.go:166] Checking apiserver status ...
	I0804 01:35:24.421687  117848 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:24.436098  117848 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:24.447828  117848 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:24.447899  117848 ssh_runner.go:195] Run: ls
	I0804 01:35:24.452768  117848 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:24.458952  117848 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:24.458989  117848 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:24.459001  117848 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:24.459016  117848 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:24.459302  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.459342  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.474339  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I0804 01:35:24.474743  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.475236  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.475263  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.475572  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.475757  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:24.477328  117848 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:24.477343  117848 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:24.477695  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.477745  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.493135  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0804 01:35:24.493601  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.494195  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.494223  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.494548  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.494754  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:24.497571  117848 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:24.497963  117848 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:24.497990  117848 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:24.498115  117848 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:24.498471  117848 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:24.498510  117848 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:24.513550  117848 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40387
	I0804 01:35:24.513961  117848 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:24.514471  117848 main.go:141] libmachine: Using API Version  1
	I0804 01:35:24.514489  117848 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:24.514865  117848 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:24.515058  117848 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:24.515283  117848 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:24.515305  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:24.518052  117848 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:24.518458  117848 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:24.518500  117848 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:24.518651  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:24.518831  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:24.518989  117848 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:24.519114  117848 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:24.605143  117848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:24.619198  117848 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 7 (634.489763ms)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:31.775460  117982 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:31.775582  117982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:31.775590  117982 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:31.775594  117982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:31.775764  117982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:31.775924  117982 out.go:298] Setting JSON to false
	I0804 01:35:31.775945  117982 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:31.776057  117982 notify.go:220] Checking for updates...
	I0804 01:35:31.776320  117982 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:31.776334  117982 status.go:255] checking status of ha-998889 ...
	I0804 01:35:31.776704  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:31.776764  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:31.797173  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38577
	I0804 01:35:31.797739  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:31.798389  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:31.798426  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:31.798860  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:31.799146  117982 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:35:31.801293  117982 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:35:31.801311  117982 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:31.801611  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:31.801647  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:31.817130  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0804 01:35:31.817715  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:31.818180  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:31.818212  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:31.818535  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:31.818720  117982 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:35:31.821913  117982 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:31.822512  117982 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:31.822549  117982 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:31.822712  117982 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:31.823018  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:31.823056  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:31.838595  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37205
	I0804 01:35:31.839011  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:31.839489  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:31.839513  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:31.839832  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:31.840029  117982 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:35:31.840219  117982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:31.840246  117982 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:35:31.842909  117982 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:31.843306  117982 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:31.843333  117982 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:31.843458  117982 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:35:31.843614  117982 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:35:31.843749  117982 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:35:31.843886  117982 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:35:31.930460  117982 ssh_runner.go:195] Run: systemctl --version
	I0804 01:35:31.936844  117982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:31.952577  117982 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:31.952609  117982 api_server.go:166] Checking apiserver status ...
	I0804 01:35:31.952657  117982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:31.967518  117982 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:35:31.977796  117982 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:31.977857  117982 ssh_runner.go:195] Run: ls
	I0804 01:35:31.982372  117982 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:31.989188  117982 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:31.989214  117982 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:35:31.989228  117982 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:31.989249  117982 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:35:31.989730  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:31.989779  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.004709  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0804 01:35:32.005172  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.005671  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.005693  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.006014  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.006197  117982 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:35:32.007730  117982 status.go:330] ha-998889-m02 host status = "Stopped" (err=<nil>)
	I0804 01:35:32.007745  117982 status.go:343] host is not running, skipping remaining checks
	I0804 01:35:32.007752  117982 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:32.007773  117982 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:32.008164  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.008210  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.023051  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0804 01:35:32.023555  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.024106  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.024141  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.024434  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.024605  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:32.026275  117982 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:32.026291  117982 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:32.026677  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.026734  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.041260  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0804 01:35:32.041658  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.042209  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.042235  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.042528  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.042706  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:32.045550  117982 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:32.045974  117982 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:32.045998  117982 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:32.046118  117982 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:32.046422  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.046460  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.061084  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0804 01:35:32.061533  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.062010  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.062033  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.062372  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.062573  117982 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:32.062757  117982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:32.062777  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:32.065309  117982 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:32.065767  117982 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:32.065795  117982 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:32.065937  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:32.066116  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:32.066269  117982 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:32.066404  117982 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:32.152793  117982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:32.168513  117982 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:32.168549  117982 api_server.go:166] Checking apiserver status ...
	I0804 01:35:32.168601  117982 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:32.181917  117982 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:32.192456  117982 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:32.192520  117982 ssh_runner.go:195] Run: ls
	I0804 01:35:32.197404  117982 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:32.202595  117982 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:32.202625  117982 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:32.202638  117982 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:32.202657  117982 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:32.202985  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.203033  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.220142  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I0804 01:35:32.220566  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.221032  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.221056  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.221410  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.221649  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:32.223135  117982 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:32.223161  117982 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:32.223460  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.223494  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.239311  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36715
	I0804 01:35:32.239688  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.240162  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.240183  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.240462  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.240689  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:32.243568  117982 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:32.244002  117982 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:32.244049  117982 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:32.244117  117982 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:32.244432  117982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:32.244482  117982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:32.259339  117982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0804 01:35:32.259714  117982 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:32.260175  117982 main.go:141] libmachine: Using API Version  1
	I0804 01:35:32.260204  117982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:32.260492  117982 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:32.260647  117982 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:32.260809  117982 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:32.260830  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:32.263663  117982 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:32.264011  117982 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:32.264030  117982 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:32.264187  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:32.264372  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:32.264490  117982 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:32.264683  117982 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:32.350288  117982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:32.365078  117982 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 7 (634.704404ms)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-998889-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:45.483773  118103 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:45.484036  118103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:45.484046  118103 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:45.484050  118103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:45.484220  118103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:45.484387  118103 out.go:298] Setting JSON to false
	I0804 01:35:45.484412  118103 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:45.484459  118103 notify.go:220] Checking for updates...
	I0804 01:35:45.484936  118103 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:45.484957  118103 status.go:255] checking status of ha-998889 ...
	I0804 01:35:45.485493  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.485573  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.500730  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45729
	I0804 01:35:45.501248  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.501868  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.501924  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.502317  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.502478  118103 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:35:45.504144  118103 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:35:45.504176  118103 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:45.504475  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.504514  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.519243  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0804 01:35:45.519640  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.520130  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.520151  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.520478  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.520678  118103 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:35:45.523450  118103 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:45.523882  118103 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:45.523905  118103 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:45.524193  118103 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:35:45.524518  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.524563  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.539192  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0804 01:35:45.539723  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.540288  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.540321  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.540652  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.540858  118103 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:35:45.541086  118103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:45.541112  118103 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:35:45.543893  118103 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:45.544288  118103 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:35:45.544318  118103 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:35:45.544442  118103 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:35:45.544582  118103 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:35:45.544733  118103 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:35:45.544849  118103 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:35:45.629529  118103 ssh_runner.go:195] Run: systemctl --version
	I0804 01:35:45.636457  118103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:45.653148  118103 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:45.653177  118103 api_server.go:166] Checking apiserver status ...
	I0804 01:35:45.653210  118103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:45.668387  118103 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0804 01:35:45.678433  118103 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:45.678500  118103 ssh_runner.go:195] Run: ls
	I0804 01:35:45.682963  118103 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:45.687314  118103 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:45.687338  118103 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:35:45.687349  118103 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:45.687367  118103 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:35:45.687750  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.687801  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.703758  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41141
	I0804 01:35:45.704281  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.704787  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.704810  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.705122  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.705316  118103 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:35:45.706980  118103 status.go:330] ha-998889-m02 host status = "Stopped" (err=<nil>)
	I0804 01:35:45.706993  118103 status.go:343] host is not running, skipping remaining checks
	I0804 01:35:45.707009  118103 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:45.707024  118103 status.go:255] checking status of ha-998889-m03 ...
	I0804 01:35:45.707307  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.707340  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.721866  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44947
	I0804 01:35:45.722311  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.722782  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.722801  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.723155  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.723369  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:45.724821  118103 status.go:330] ha-998889-m03 host status = "Running" (err=<nil>)
	I0804 01:35:45.724841  118103 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:45.725297  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.725345  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.741639  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0804 01:35:45.742158  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.742627  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.742651  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.742984  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.743185  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:35:45.745749  118103 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:45.746172  118103 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:45.746197  118103 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:45.746349  118103 host.go:66] Checking if "ha-998889-m03" exists ...
	I0804 01:35:45.746676  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.746714  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.762277  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I0804 01:35:45.762699  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.763185  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.763207  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.763514  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.763666  118103 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:45.763879  118103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:45.763897  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:45.766310  118103 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:45.766695  118103 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:45.766727  118103 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:45.766850  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:45.767021  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:45.767196  118103 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:45.767351  118103 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:45.853905  118103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:45.870507  118103 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:35:45.870539  118103 api_server.go:166] Checking apiserver status ...
	I0804 01:35:45.870580  118103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:35:45.890809  118103 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup
	W0804 01:35:45.901676  118103 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1488/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:35:45.901754  118103 ssh_runner.go:195] Run: ls
	I0804 01:35:45.906706  118103 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:35:45.912604  118103 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:35:45.912626  118103 status.go:422] ha-998889-m03 apiserver status = Running (err=<nil>)
	I0804 01:35:45.912635  118103 status.go:257] ha-998889-m03 status: &{Name:ha-998889-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:35:45.912650  118103 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:35:45.912990  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.913022  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.927825  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0804 01:35:45.928212  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.928704  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.928728  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.929024  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.929242  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:45.930796  118103 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:35:45.930817  118103 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:45.931128  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.931170  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.945631  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0804 01:35:45.946018  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.946468  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.946488  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.946780  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.946957  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:35:45.949504  118103 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:45.949923  118103 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:45.949950  118103 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:45.950058  118103 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:35:45.950455  118103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:45.950497  118103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:45.964987  118103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34097
	I0804 01:35:45.965401  118103 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:45.965945  118103 main.go:141] libmachine: Using API Version  1
	I0804 01:35:45.965969  118103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:45.966292  118103 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:45.966498  118103 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:45.966670  118103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:35:45.966693  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:45.969418  118103 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:45.969819  118103 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:45.969848  118103 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:45.969962  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:45.970115  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:45.970268  118103 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:45.970383  118103 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:46.057856  118103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:35:46.073053  118103 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-998889 -n ha-998889
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-998889 logs -n 25: (1.437837745s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m03_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m04 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp testdata/cp-test.txt                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m04_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03:/home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m03 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-998889 node stop m02 -v=7                                                     | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-998889 node start m02 -v=7                                                    | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 01:27:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 01:27:34.034390  112472 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:27:34.034628  112472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:27:34.034636  112472 out.go:304] Setting ErrFile to fd 2...
	I0804 01:27:34.034640  112472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:27:34.034808  112472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:27:34.035375  112472 out.go:298] Setting JSON to false
	I0804 01:27:34.036213  112472 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11398,"bootTime":1722723456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:27:34.036272  112472 start.go:139] virtualization: kvm guest
	I0804 01:27:34.038622  112472 out.go:177] * [ha-998889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 01:27:34.039992  112472 notify.go:220] Checking for updates...
	I0804 01:27:34.039997  112472 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:27:34.041501  112472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:27:34.042842  112472 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:27:34.044303  112472 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.045687  112472 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:27:34.047131  112472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:27:34.048733  112472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:27:34.085326  112472 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 01:27:34.086720  112472 start.go:297] selected driver: kvm2
	I0804 01:27:34.086738  112472 start.go:901] validating driver "kvm2" against <nil>
	I0804 01:27:34.086749  112472 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:27:34.087453  112472 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:27:34.087532  112472 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 01:27:34.102852  112472 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 01:27:34.102915  112472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 01:27:34.103181  112472 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:27:34.103294  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:27:34.103310  112472 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0804 01:27:34.103321  112472 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0804 01:27:34.103396  112472 start.go:340] cluster config:
	{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:27:34.103534  112472 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:27:34.105404  112472 out.go:177] * Starting "ha-998889" primary control-plane node in "ha-998889" cluster
	I0804 01:27:34.106666  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:27:34.106700  112472 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 01:27:34.106710  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:27:34.106791  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:27:34.106809  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:27:34.107104  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:27:34.107123  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json: {Name:mkf33ef6ad14f588f0aced43adb897e0932e1149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:27:34.107254  112472 start.go:360] acquireMachinesLock for ha-998889: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:27:34.107280  112472 start.go:364] duration metric: took 14.445µs to acquireMachinesLock for "ha-998889"
	I0804 01:27:34.107296  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:27:34.107350  112472 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 01:27:34.109010  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:27:34.109166  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:27:34.109212  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:27:34.123648  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0804 01:27:34.124111  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:27:34.124657  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:27:34.124688  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:27:34.125044  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:27:34.125269  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:34.125439  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:34.125594  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:27:34.125626  112472 client.go:168] LocalClient.Create starting
	I0804 01:27:34.125657  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:27:34.125688  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:27:34.125710  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:27:34.125765  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:27:34.125790  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:27:34.125803  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:27:34.125819  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:27:34.125827  112472 main.go:141] libmachine: (ha-998889) Calling .PreCreateCheck
	I0804 01:27:34.126164  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:34.126551  112472 main.go:141] libmachine: Creating machine...
	I0804 01:27:34.126565  112472 main.go:141] libmachine: (ha-998889) Calling .Create
	I0804 01:27:34.126711  112472 main.go:141] libmachine: (ha-998889) Creating KVM machine...
	I0804 01:27:34.128025  112472 main.go:141] libmachine: (ha-998889) DBG | found existing default KVM network
	I0804 01:27:34.128678  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.128531  112496 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0804 01:27:34.128711  112472 main.go:141] libmachine: (ha-998889) DBG | created network xml: 
	I0804 01:27:34.128741  112472 main.go:141] libmachine: (ha-998889) DBG | <network>
	I0804 01:27:34.128754  112472 main.go:141] libmachine: (ha-998889) DBG |   <name>mk-ha-998889</name>
	I0804 01:27:34.128764  112472 main.go:141] libmachine: (ha-998889) DBG |   <dns enable='no'/>
	I0804 01:27:34.128771  112472 main.go:141] libmachine: (ha-998889) DBG |   
	I0804 01:27:34.128780  112472 main.go:141] libmachine: (ha-998889) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0804 01:27:34.128798  112472 main.go:141] libmachine: (ha-998889) DBG |     <dhcp>
	I0804 01:27:34.128812  112472 main.go:141] libmachine: (ha-998889) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0804 01:27:34.128822  112472 main.go:141] libmachine: (ha-998889) DBG |     </dhcp>
	I0804 01:27:34.128829  112472 main.go:141] libmachine: (ha-998889) DBG |   </ip>
	I0804 01:27:34.128835  112472 main.go:141] libmachine: (ha-998889) DBG |   
	I0804 01:27:34.128842  112472 main.go:141] libmachine: (ha-998889) DBG | </network>
	I0804 01:27:34.128851  112472 main.go:141] libmachine: (ha-998889) DBG | 
	I0804 01:27:34.133686  112472 main.go:141] libmachine: (ha-998889) DBG | trying to create private KVM network mk-ha-998889 192.168.39.0/24...
	I0804 01:27:34.200185  112472 main.go:141] libmachine: (ha-998889) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 ...
	I0804 01:27:34.200212  112472 main.go:141] libmachine: (ha-998889) DBG | private KVM network mk-ha-998889 192.168.39.0/24 created
	I0804 01:27:34.200223  112472 main.go:141] libmachine: (ha-998889) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:27:34.200261  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.200108  112496 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.200297  112472 main.go:141] libmachine: (ha-998889) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:27:34.476534  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.476353  112496 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa...
	I0804 01:27:34.626294  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.626120  112496 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/ha-998889.rawdisk...
	I0804 01:27:34.626331  112472 main.go:141] libmachine: (ha-998889) DBG | Writing magic tar header
	I0804 01:27:34.626373  112472 main.go:141] libmachine: (ha-998889) DBG | Writing SSH key tar header
	I0804 01:27:34.626402  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:34.626283  112496 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 ...
	I0804 01:27:34.626416  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889 (perms=drwx------)
	I0804 01:27:34.626434  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889
	I0804 01:27:34.626445  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:27:34.626452  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:27:34.626461  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:27:34.626473  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:27:34.626485  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:27:34.626496  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:27:34.626509  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:27:34.626520  112472 main.go:141] libmachine: (ha-998889) DBG | Checking permissions on dir: /home
	I0804 01:27:34.626533  112472 main.go:141] libmachine: (ha-998889) DBG | Skipping /home - not owner
	I0804 01:27:34.626543  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:27:34.626551  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:27:34.626556  112472 main.go:141] libmachine: (ha-998889) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:27:34.626565  112472 main.go:141] libmachine: (ha-998889) Creating domain...
	I0804 01:27:34.627790  112472 main.go:141] libmachine: (ha-998889) define libvirt domain using xml: 
	I0804 01:27:34.627809  112472 main.go:141] libmachine: (ha-998889) <domain type='kvm'>
	I0804 01:27:34.627816  112472 main.go:141] libmachine: (ha-998889)   <name>ha-998889</name>
	I0804 01:27:34.627825  112472 main.go:141] libmachine: (ha-998889)   <memory unit='MiB'>2200</memory>
	I0804 01:27:34.627830  112472 main.go:141] libmachine: (ha-998889)   <vcpu>2</vcpu>
	I0804 01:27:34.627840  112472 main.go:141] libmachine: (ha-998889)   <features>
	I0804 01:27:34.627846  112472 main.go:141] libmachine: (ha-998889)     <acpi/>
	I0804 01:27:34.627852  112472 main.go:141] libmachine: (ha-998889)     <apic/>
	I0804 01:27:34.627860  112472 main.go:141] libmachine: (ha-998889)     <pae/>
	I0804 01:27:34.627868  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.627897  112472 main.go:141] libmachine: (ha-998889)   </features>
	I0804 01:27:34.627904  112472 main.go:141] libmachine: (ha-998889)   <cpu mode='host-passthrough'>
	I0804 01:27:34.627929  112472 main.go:141] libmachine: (ha-998889)   
	I0804 01:27:34.627952  112472 main.go:141] libmachine: (ha-998889)   </cpu>
	I0804 01:27:34.627961  112472 main.go:141] libmachine: (ha-998889)   <os>
	I0804 01:27:34.627974  112472 main.go:141] libmachine: (ha-998889)     <type>hvm</type>
	I0804 01:27:34.627995  112472 main.go:141] libmachine: (ha-998889)     <boot dev='cdrom'/>
	I0804 01:27:34.628013  112472 main.go:141] libmachine: (ha-998889)     <boot dev='hd'/>
	I0804 01:27:34.628022  112472 main.go:141] libmachine: (ha-998889)     <bootmenu enable='no'/>
	I0804 01:27:34.628029  112472 main.go:141] libmachine: (ha-998889)   </os>
	I0804 01:27:34.628037  112472 main.go:141] libmachine: (ha-998889)   <devices>
	I0804 01:27:34.628048  112472 main.go:141] libmachine: (ha-998889)     <disk type='file' device='cdrom'>
	I0804 01:27:34.628064  112472 main.go:141] libmachine: (ha-998889)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/boot2docker.iso'/>
	I0804 01:27:34.628102  112472 main.go:141] libmachine: (ha-998889)       <target dev='hdc' bus='scsi'/>
	I0804 01:27:34.628118  112472 main.go:141] libmachine: (ha-998889)       <readonly/>
	I0804 01:27:34.628128  112472 main.go:141] libmachine: (ha-998889)     </disk>
	I0804 01:27:34.628138  112472 main.go:141] libmachine: (ha-998889)     <disk type='file' device='disk'>
	I0804 01:27:34.628152  112472 main.go:141] libmachine: (ha-998889)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:27:34.628174  112472 main.go:141] libmachine: (ha-998889)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/ha-998889.rawdisk'/>
	I0804 01:27:34.628184  112472 main.go:141] libmachine: (ha-998889)       <target dev='hda' bus='virtio'/>
	I0804 01:27:34.628190  112472 main.go:141] libmachine: (ha-998889)     </disk>
	I0804 01:27:34.628199  112472 main.go:141] libmachine: (ha-998889)     <interface type='network'>
	I0804 01:27:34.628208  112472 main.go:141] libmachine: (ha-998889)       <source network='mk-ha-998889'/>
	I0804 01:27:34.628213  112472 main.go:141] libmachine: (ha-998889)       <model type='virtio'/>
	I0804 01:27:34.628218  112472 main.go:141] libmachine: (ha-998889)     </interface>
	I0804 01:27:34.628224  112472 main.go:141] libmachine: (ha-998889)     <interface type='network'>
	I0804 01:27:34.628233  112472 main.go:141] libmachine: (ha-998889)       <source network='default'/>
	I0804 01:27:34.628245  112472 main.go:141] libmachine: (ha-998889)       <model type='virtio'/>
	I0804 01:27:34.628265  112472 main.go:141] libmachine: (ha-998889)     </interface>
	I0804 01:27:34.628295  112472 main.go:141] libmachine: (ha-998889)     <serial type='pty'>
	I0804 01:27:34.628320  112472 main.go:141] libmachine: (ha-998889)       <target port='0'/>
	I0804 01:27:34.628334  112472 main.go:141] libmachine: (ha-998889)     </serial>
	I0804 01:27:34.628342  112472 main.go:141] libmachine: (ha-998889)     <console type='pty'>
	I0804 01:27:34.628357  112472 main.go:141] libmachine: (ha-998889)       <target type='serial' port='0'/>
	I0804 01:27:34.628387  112472 main.go:141] libmachine: (ha-998889)     </console>
	I0804 01:27:34.628398  112472 main.go:141] libmachine: (ha-998889)     <rng model='virtio'>
	I0804 01:27:34.628411  112472 main.go:141] libmachine: (ha-998889)       <backend model='random'>/dev/random</backend>
	I0804 01:27:34.628427  112472 main.go:141] libmachine: (ha-998889)     </rng>
	I0804 01:27:34.628438  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.628445  112472 main.go:141] libmachine: (ha-998889)     
	I0804 01:27:34.628456  112472 main.go:141] libmachine: (ha-998889)   </devices>
	I0804 01:27:34.628465  112472 main.go:141] libmachine: (ha-998889) </domain>
	I0804 01:27:34.628476  112472 main.go:141] libmachine: (ha-998889) 
	I0804 01:27:34.634476  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:a4:06:fd in network default
	I0804 01:27:34.635130  112472 main.go:141] libmachine: (ha-998889) Ensuring networks are active...
	I0804 01:27:34.635154  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:34.635863  112472 main.go:141] libmachine: (ha-998889) Ensuring network default is active
	I0804 01:27:34.636220  112472 main.go:141] libmachine: (ha-998889) Ensuring network mk-ha-998889 is active
	I0804 01:27:34.636687  112472 main.go:141] libmachine: (ha-998889) Getting domain xml...
	I0804 01:27:34.637514  112472 main.go:141] libmachine: (ha-998889) Creating domain...
	I0804 01:27:35.817970  112472 main.go:141] libmachine: (ha-998889) Waiting to get IP...
	I0804 01:27:35.818833  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:35.819223  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:35.819283  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:35.819224  112496 retry.go:31] will retry after 296.598754ms: waiting for machine to come up
	I0804 01:27:36.117830  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.118300  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.118325  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.118261  112496 retry.go:31] will retry after 256.62577ms: waiting for machine to come up
	I0804 01:27:36.376733  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.377268  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.377297  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.377194  112496 retry.go:31] will retry after 355.609942ms: waiting for machine to come up
	I0804 01:27:36.734884  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:36.735340  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:36.735366  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:36.735294  112496 retry.go:31] will retry after 478.320401ms: waiting for machine to come up
	I0804 01:27:37.214721  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:37.215102  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:37.215159  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:37.215057  112496 retry.go:31] will retry after 567.406004ms: waiting for machine to come up
	I0804 01:27:37.783807  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:37.784250  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:37.784279  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:37.784204  112496 retry.go:31] will retry after 758.01729ms: waiting for machine to come up
	I0804 01:27:38.544371  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:38.544908  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:38.544944  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:38.544730  112496 retry.go:31] will retry after 823.463269ms: waiting for machine to come up
	I0804 01:27:39.369409  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:39.369811  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:39.369841  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:39.369759  112496 retry.go:31] will retry after 1.463845637s: waiting for machine to come up
	I0804 01:27:40.835396  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:40.835732  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:40.835760  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:40.835674  112496 retry.go:31] will retry after 1.816575461s: waiting for machine to come up
	I0804 01:27:42.654405  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:42.654827  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:42.654857  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:42.654774  112496 retry.go:31] will retry after 1.40027298s: waiting for machine to come up
	I0804 01:27:44.057276  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:44.057718  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:44.057744  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:44.057677  112496 retry.go:31] will retry after 2.379743455s: waiting for machine to come up
	I0804 01:27:46.439422  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:46.439732  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:46.439758  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:46.439684  112496 retry.go:31] will retry after 3.528768878s: waiting for machine to come up
	I0804 01:27:49.969771  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:49.970248  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:49.970276  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:49.970195  112496 retry.go:31] will retry after 3.073877797s: waiting for machine to come up
	I0804 01:27:53.047398  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:53.047739  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find current IP address of domain ha-998889 in network mk-ha-998889
	I0804 01:27:53.047761  112472 main.go:141] libmachine: (ha-998889) DBG | I0804 01:27:53.047682  112496 retry.go:31] will retry after 4.825115092s: waiting for machine to come up
	I0804 01:27:57.876864  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.877277  112472 main.go:141] libmachine: (ha-998889) Found IP for machine: 192.168.39.12
	I0804 01:27:57.877303  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has current primary IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.877309  112472 main.go:141] libmachine: (ha-998889) Reserving static IP address...
	I0804 01:27:57.877836  112472 main.go:141] libmachine: (ha-998889) DBG | unable to find host DHCP lease matching {name: "ha-998889", mac: "52:54:00:3a:37:c1", ip: "192.168.39.12"} in network mk-ha-998889
	I0804 01:27:57.950737  112472 main.go:141] libmachine: (ha-998889) DBG | Getting to WaitForSSH function...
	I0804 01:27:57.950766  112472 main.go:141] libmachine: (ha-998889) Reserved static IP address: 192.168.39.12
	I0804 01:27:57.950779  112472 main.go:141] libmachine: (ha-998889) Waiting for SSH to be available...
	I0804 01:27:57.953549  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.953969  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:57.953997  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:57.954301  112472 main.go:141] libmachine: (ha-998889) DBG | Using SSH client type: external
	I0804 01:27:57.954324  112472 main.go:141] libmachine: (ha-998889) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa (-rw-------)
	I0804 01:27:57.954367  112472 main.go:141] libmachine: (ha-998889) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:27:57.954386  112472 main.go:141] libmachine: (ha-998889) DBG | About to run SSH command:
	I0804 01:27:57.954402  112472 main.go:141] libmachine: (ha-998889) DBG | exit 0
	I0804 01:27:58.081404  112472 main.go:141] libmachine: (ha-998889) DBG | SSH cmd err, output: <nil>: 
	I0804 01:27:58.081657  112472 main.go:141] libmachine: (ha-998889) KVM machine creation complete!
	I0804 01:27:58.081974  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:58.082535  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:58.082730  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:58.082964  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:27:58.082976  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:27:58.084487  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:27:58.084503  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:27:58.084511  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:27:58.084545  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.086802  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.087131  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.087155  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.087277  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.087400  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.087510  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.087654  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.087831  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.088075  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.088092  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:27:58.196986  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:27:58.197012  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:27:58.197023  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.199725  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.200144  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.200174  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.200323  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.200526  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.200669  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.200790  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.200958  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.201211  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.201225  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:27:58.310564  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:27:58.310645  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:27:58.310651  112472 main.go:141] libmachine: Provisioning with buildroot...
	I0804 01:27:58.310658  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.310944  112472 buildroot.go:166] provisioning hostname "ha-998889"
	I0804 01:27:58.310976  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.311169  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.313818  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.314187  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.314208  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.314413  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.314644  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.314830  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.314980  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.315179  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.315386  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.315401  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889 && echo "ha-998889" | sudo tee /etc/hostname
	I0804 01:27:58.440622  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:27:58.440651  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.443388  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.443772  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.443803  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.444011  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.444222  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.444377  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.444554  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.444740  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.444917  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.444933  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:27:58.562313  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:27:58.562345  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:27:58.562385  112472 buildroot.go:174] setting up certificates
	I0804 01:27:58.562394  112472 provision.go:84] configureAuth start
	I0804 01:27:58.562403  112472 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:27:58.562700  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:58.565414  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.565784  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.565827  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.566055  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.568162  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.568441  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.568485  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.568560  112472 provision.go:143] copyHostCerts
	I0804 01:27:58.568601  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:27:58.568635  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:27:58.568643  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:27:58.568706  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:27:58.568791  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:27:58.568811  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:27:58.568815  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:27:58.568839  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:27:58.568874  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:27:58.568888  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:27:58.568891  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:27:58.568916  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:27:58.568957  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889 san=[127.0.0.1 192.168.39.12 ha-998889 localhost minikube]
	I0804 01:27:58.649203  112472 provision.go:177] copyRemoteCerts
	I0804 01:27:58.649275  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:27:58.649302  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.652682  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.653144  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.653168  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.653369  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.653554  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.653734  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.653902  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:58.739651  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:27:58.739722  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:27:58.762637  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:27:58.762710  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0804 01:27:58.785185  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:27:58.785278  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:27:58.807674  112472 provision.go:87] duration metric: took 245.265863ms to configureAuth
	I0804 01:27:58.807705  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:27:58.807885  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:27:58.807967  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:58.810489  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.810816  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:58.810846  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:58.811001  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:58.811293  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.811472  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:58.811633  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:58.811813  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:58.812018  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:58.812036  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:27:59.081281  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:27:59.081315  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:27:59.081326  112472 main.go:141] libmachine: (ha-998889) Calling .GetURL
	I0804 01:27:59.082745  112472 main.go:141] libmachine: (ha-998889) DBG | Using libvirt version 6000000
	I0804 01:27:59.084971  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.085294  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.085320  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.085480  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:27:59.085514  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:27:59.085527  112472 client.go:171] duration metric: took 24.959888572s to LocalClient.Create
	I0804 01:27:59.085561  112472 start.go:167] duration metric: took 24.95996898s to libmachine.API.Create "ha-998889"
	I0804 01:27:59.085574  112472 start.go:293] postStartSetup for "ha-998889" (driver="kvm2")
	I0804 01:27:59.085588  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:27:59.085614  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.085881  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:27:59.085909  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.087964  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.088220  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.088245  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.088406  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.088563  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.088717  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.088917  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.173983  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:27:59.178400  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:27:59.178430  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:27:59.178495  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:27:59.178601  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:27:59.178613  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:27:59.178743  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:27:59.190203  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:27:59.216209  112472 start.go:296] duration metric: took 130.616918ms for postStartSetup
	I0804 01:27:59.216259  112472 main.go:141] libmachine: (ha-998889) Calling .GetConfigRaw
	I0804 01:27:59.216863  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:59.219616  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.220035  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.220056  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.220309  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:27:59.220511  112472 start.go:128] duration metric: took 25.113151184s to createHost
	I0804 01:27:59.220534  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.222940  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.223136  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.223167  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.223325  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.223491  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.223643  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.223755  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.223940  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:27:59.224112  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:27:59.224130  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:27:59.334253  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722734879.314213638
	
	I0804 01:27:59.334277  112472 fix.go:216] guest clock: 1722734879.314213638
	I0804 01:27:59.334284  112472 fix.go:229] Guest: 2024-08-04 01:27:59.314213638 +0000 UTC Remote: 2024-08-04 01:27:59.220523818 +0000 UTC m=+25.222386029 (delta=93.68982ms)
	I0804 01:27:59.334306  112472 fix.go:200] guest clock delta is within tolerance: 93.68982ms
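	The guest/host clock comparison above can be reproduced by hand. A minimal sketch (assuming SSH access with the generated key and the "docker" user, both shown elsewhere in this log):
	  # Sample the guest clock over SSH and compare it with the local clock.
	  HOST_TS=$(date +%s.%N)
	  GUEST_TS=$(ssh -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa docker@192.168.39.12 'date +%s.%N')
	  # Print the absolute skew in seconds; the fix.go check above tolerates small deltas.
	  echo "$HOST_TS $GUEST_TS" | awk '{d=$1-$2; if (d<0) d=-d; printf "delta=%.6fs\n", d}'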
	I0804 01:27:59.334311  112472 start.go:83] releasing machines lock for "ha-998889", held for 25.227022794s
	I0804 01:27:59.334328  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.334582  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:27:59.337372  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.337817  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.337843  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.338000  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.338680  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.338907  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:27:59.339026  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:27:59.339068  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.339186  112472 ssh_runner.go:195] Run: cat /version.json
	I0804 01:27:59.339211  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:27:59.341918  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.341939  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342330  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.342357  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342428  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:27:59.342459  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.342467  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:27:59.342662  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.342676  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:27:59.342855  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:27:59.342870  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.343021  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:27:59.343064  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.343140  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:27:59.441857  112472 ssh_runner.go:195] Run: systemctl --version
	I0804 01:27:59.447801  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:27:59.608632  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:27:59.615401  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:27:59.615478  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:27:59.631843  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:27:59.631872  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:27:59.631949  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:27:59.647341  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:27:59.661296  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:27:59.661370  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:27:59.675596  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:27:59.689634  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:27:59.803349  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:27:59.942225  112472 docker.go:233] disabling docker service ...
	I0804 01:27:59.942310  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:27:59.957083  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:27:59.970098  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:28:00.108965  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:28:00.230198  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:28:00.244364  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:28:00.262827  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:28:00.262883  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.273379  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:28:00.273443  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.284065  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.294637  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.305280  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:28:00.316420  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.327255  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:00.344330  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
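	After the sed edits above, the relevant CRI-O drop-in should carry the injected values. A sketch of a manual check (the expected lines come from the commands above, not from separate output):
	  # Confirm the drop-in picked up the pause image, cgroup driver and sysctl override.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected, roughly:
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",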
	I0804 01:28:00.355505  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:28:00.366051  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:28:00.366132  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:28:00.379276  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
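	The fallback above (modprobe after the failed sysctl read) can be verified directly on the node; a minimal sketch:
	  # Loading br_netfilter creates the /proc/sys/net/bridge/* entries the earlier check could not find.
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables
	  # IPv4 forwarding, enabled above via /proc, can be confirmed the same way.
	  sysctl net.ipv4.ip_forward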
	I0804 01:28:00.389069  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:00.507815  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:28:00.642273  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:28:00.642363  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:28:00.647404  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:28:00.647470  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:28:00.651326  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:28:00.691325  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:28:00.691405  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:00.719613  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:00.749170  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:28:00.750657  112472 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:28:00.753475  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:00.753835  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:00.753865  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:00.754065  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:28:00.758441  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:28:00.771564  112472 kubeadm.go:883] updating cluster {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 01:28:00.771673  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:28:00.771772  112472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:28:00.803244  112472 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0804 01:28:00.803317  112472 ssh_runner.go:195] Run: which lz4
	I0804 01:28:00.807363  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0804 01:28:00.807453  112472 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 01:28:00.811445  112472 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 01:28:00.811471  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0804 01:28:02.217727  112472 crio.go:462] duration metric: took 1.410295481s to copy over tarball
	I0804 01:28:02.217811  112472 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 01:28:04.389307  112472 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.171464178s)
	I0804 01:28:04.389337  112472 crio.go:469] duration metric: took 2.171577201s to extract the tarball
	I0804 01:28:04.389345  112472 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 01:28:04.429170  112472 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:28:04.482945  112472 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:28:04.482971  112472 cache_images.go:84] Images are preloaded, skipping loading
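	Whether the preload actually took effect can be spot-checked on the node. A sketch (jq is an assumption here, not something the test itself uses):
	  # List preloaded image tags and look for control-plane images of the target version.
	  sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep v1.30.3
	  # Expect entries such as registry.k8s.io/kube-apiserver:v1.30.3 once the tarball is extracted.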
	I0804 01:28:04.482979  112472 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.30.3 crio true true} ...
	I0804 01:28:04.483107  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
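	Once the unit files are copied over (see the scp steps below), the rendered kubelet unit and its drop-in can be inspected with systemd itself; a minimal sketch:
	  # Shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart above.
	  systemctl cat kubelet
	  # Flags of interest: --hostname-override=ha-998889 and --node-ip=192.168.39.12.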
	I0804 01:28:04.483200  112472 ssh_runner.go:195] Run: crio config
	I0804 01:28:04.532700  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:28:04.532721  112472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 01:28:04.532733  112472 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 01:28:04.532756  112472 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-998889 NodeName:ha-998889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 01:28:04.532953  112472 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-998889"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
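	The rendered config above can be validated without mutating node state. A sketch (on a fresh node, before init has run; the path is the one minikube writes below):
	  # --dry-run renders the manifests and validates the config without starting anything.
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run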
	
	I0804 01:28:04.532995  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:28:04.533045  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:28:04.552308  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:28:04.552441  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
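	Once this static pod is up, the VIP from the manifest (192.168.39.254) is announced on eth0 by whichever control-plane node holds the plndr-cp-lock lease. A sketch of a manual check (assumes a working kubeconfig):
	  # The elected leader binds the control-plane VIP on eth0.
	  ip addr show dev eth0 | grep 192.168.39.254
	  # The leader-election lease named in the manifest lives in kube-system.
	  kubectl -n kube-system get lease plndr-cp-lock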
	I0804 01:28:04.552507  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:04.563501  112472 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 01:28:04.563592  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0804 01:28:04.573610  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0804 01:28:04.590467  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:28:04.607300  112472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0804 01:28:04.624655  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0804 01:28:04.641481  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:28:04.645541  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:28:04.658825  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:04.796838  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:28:04.815145  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.12
	I0804 01:28:04.815182  112472 certs.go:194] generating shared ca certs ...
	I0804 01:28:04.815204  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.815403  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:28:04.815446  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:28:04.815456  112472 certs.go:256] generating profile certs ...
	I0804 01:28:04.815511  112472 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:28:04.815530  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt with IP's: []
	I0804 01:28:04.940009  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt ...
	I0804 01:28:04.940038  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt: {Name:mk79fa1e4ae1118cf8f8c0c19ef697182e8e9377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.940226  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key ...
	I0804 01:28:04.940240  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key: {Name:mkf7d9a24b1ec2627891807d54c289d2bfd23b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:04.940316  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc
	I0804 01:28:04.940331  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.254]
	I0804 01:28:05.009427  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc ...
	I0804 01:28:05.009456  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc: {Name:mk86e869e2e67e118d26f58ab0277fe9fca1ae8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.009611  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc ...
	I0804 01:28:05.009626  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc: {Name:mkc1460bc2d558f3afc3fb170f119d6e0e4da2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.009695  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.0fad81cc -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:28:05.009786  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.0fad81cc -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
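	The IP SANs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.12 and the HA VIP 192.168.39.254) can be read back from the written certificate; a sketch:
	  PROFILE=/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889
	  # Dump the SAN extension of the freshly assembled apiserver cert.
	  openssl x509 -noout -text -in "$PROFILE/apiserver.crt" | grep -A1 'Subject Alternative Name'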
	I0804 01:28:05.009845  112472 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:28:05.009861  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt with IP's: []
	I0804 01:28:05.178241  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt ...
	I0804 01:28:05.178275  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt: {Name:mk30715d33d423e2f3b5a89adcfd91e99c30f659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.178439  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key ...
	I0804 01:28:05.178449  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key: {Name:mkc8177c06a3f681ba706656a57bcbc40c783550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:05.178517  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:28:05.178534  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:28:05.178544  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:28:05.178558  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:28:05.178573  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:28:05.178586  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:28:05.178599  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:28:05.178608  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:28:05.178656  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:28:05.178693  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:28:05.178702  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:28:05.178725  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:28:05.178749  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:28:05.178769  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:28:05.178807  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:05.178839  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.178852  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.178864  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.179448  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:28:05.205893  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:28:05.230021  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:28:05.255588  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:28:05.280581  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 01:28:05.305073  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 01:28:05.328855  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:28:05.353197  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:28:05.378515  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:28:05.402783  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:28:05.427163  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:28:05.452026  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 01:28:05.469012  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:28:05.475114  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:28:05.485908  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.490320  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.490378  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:28:05.496393  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:28:05.507321  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:28:05.517854  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.522274  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.522312  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:28:05.527830  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:28:05.538239  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:28:05.548946  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.553710  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.553782  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:05.559731  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
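	The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates; a sketch of how they are derived and checked:
	  # The subject hash determines the link name under /etc/ssl/certs.
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941, matching the link above
	  # With the hashed link in place, verification against the CA directory succeeds.
	  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem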
	I0804 01:28:05.570905  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:28:05.575037  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:28:05.575098  112472 kubeadm.go:392] StartCluster: {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:28:05.575214  112472 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 01:28:05.575271  112472 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 01:28:05.633443  112472 cri.go:89] found id: ""
	I0804 01:28:05.633513  112472 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 01:28:05.651461  112472 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 01:28:05.670980  112472 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 01:28:05.683207  112472 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 01:28:05.683231  112472 kubeadm.go:157] found existing configuration files:
	
	I0804 01:28:05.683289  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 01:28:05.693330  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 01:28:05.693409  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 01:28:05.703503  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 01:28:05.713494  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 01:28:05.713594  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 01:28:05.723579  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 01:28:05.733641  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 01:28:05.733697  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 01:28:05.743835  112472 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 01:28:05.753948  112472 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 01:28:05.754007  112472 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 01:28:05.764492  112472 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 01:28:05.875281  112472 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0804 01:28:05.875374  112472 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 01:28:06.001567  112472 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 01:28:06.001761  112472 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 01:28:06.001898  112472 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 01:28:06.218175  112472 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 01:28:06.397461  112472 out.go:204]   - Generating certificates and keys ...
	I0804 01:28:06.397596  112472 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 01:28:06.397670  112472 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 01:28:06.397772  112472 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 01:28:06.441750  112472 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 01:28:06.891891  112472 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 01:28:06.999877  112472 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 01:28:07.158478  112472 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 01:28:07.158751  112472 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-998889 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0804 01:28:07.336591  112472 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 01:28:07.336808  112472 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-998889 localhost] and IPs [192.168.39.12 127.0.0.1 ::1]
	I0804 01:28:07.503189  112472 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 01:28:07.724675  112472 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 01:28:08.127674  112472 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 01:28:08.127969  112472 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 01:28:08.391458  112472 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 01:28:08.511434  112472 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0804 01:28:08.701182  112472 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 01:28:08.804919  112472 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 01:28:08.956483  112472 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 01:28:08.957068  112472 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 01:28:08.959575  112472 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 01:28:08.961875  112472 out.go:204]   - Booting up control plane ...
	I0804 01:28:08.961985  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 01:28:08.962077  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 01:28:08.962173  112472 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 01:28:08.980748  112472 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 01:28:08.983505  112472 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 01:28:08.983585  112472 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 01:28:09.112364  112472 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0804 01:28:09.112471  112472 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0804 01:28:09.613884  112472 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.094392ms
	I0804 01:28:09.613972  112472 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0804 01:28:15.601597  112472 kubeadm.go:310] [api-check] The API server is healthy after 5.990804115s
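	The api-check above polls the API server's health endpoint on the local bind port; an equivalent manual probe, as a sketch:
	  # The apiserver listens on 8443 per the kubeadm config above; /healthz returns "ok" when ready.
	  curl -sk https://localhost:8443/healthz; echo
	  # Or via kubectl once admin.conf exists:
	  sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw /healthz; echo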
	I0804 01:28:15.617412  112472 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0804 01:28:15.636486  112472 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0804 01:28:15.668429  112472 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0804 01:28:15.668645  112472 kubeadm.go:310] [mark-control-plane] Marking the node ha-998889 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0804 01:28:15.685753  112472 kubeadm.go:310] [bootstrap-token] Using token: 6isgoe.8x9m8twbydje2d0l
	I0804 01:28:15.687214  112472 out.go:204]   - Configuring RBAC rules ...
	I0804 01:28:15.687354  112472 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0804 01:28:15.700905  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0804 01:28:15.717628  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0804 01:28:15.721175  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0804 01:28:15.724694  112472 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0804 01:28:15.728491  112472 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0804 01:28:16.008898  112472 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0804 01:28:16.446887  112472 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0804 01:28:17.009123  112472 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0804 01:28:17.009149  112472 kubeadm.go:310] 
	I0804 01:28:17.009213  112472 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0804 01:28:17.009221  112472 kubeadm.go:310] 
	I0804 01:28:17.009311  112472 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0804 01:28:17.009319  112472 kubeadm.go:310] 
	I0804 01:28:17.009344  112472 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0804 01:28:17.009469  112472 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0804 01:28:17.009557  112472 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0804 01:28:17.009568  112472 kubeadm.go:310] 
	I0804 01:28:17.009646  112472 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0804 01:28:17.009656  112472 kubeadm.go:310] 
	I0804 01:28:17.009745  112472 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0804 01:28:17.009761  112472 kubeadm.go:310] 
	I0804 01:28:17.009828  112472 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0804 01:28:17.009896  112472 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0804 01:28:17.009956  112472 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0804 01:28:17.009962  112472 kubeadm.go:310] 
	I0804 01:28:17.010035  112472 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0804 01:28:17.010098  112472 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0804 01:28:17.010104  112472 kubeadm.go:310] 
	I0804 01:28:17.010174  112472 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6isgoe.8x9m8twbydje2d0l \
	I0804 01:28:17.010280  112472 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e \
	I0804 01:28:17.010300  112472 kubeadm.go:310] 	--control-plane 
	I0804 01:28:17.010304  112472 kubeadm.go:310] 
	I0804 01:28:17.010371  112472 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0804 01:28:17.010377  112472 kubeadm.go:310] 
	I0804 01:28:17.010451  112472 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6isgoe.8x9m8twbydje2d0l \
	I0804 01:28:17.010535  112472 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e 
	I0804 01:28:17.011131  112472 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 01:28:17.011158  112472 cni.go:84] Creating CNI manager for ""
	I0804 01:28:17.011167  112472 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0804 01:28:17.014169  112472 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0804 01:28:17.015659  112472 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0804 01:28:17.021041  112472 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0804 01:28:17.021064  112472 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0804 01:28:17.043824  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0804 01:28:17.417299  112472 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 01:28:17.417390  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:17.417391  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889 minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=true
	I0804 01:28:17.455986  112472 ops.go:34] apiserver oom_adj: -16
	I0804 01:28:17.615580  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:18.115739  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:18.616056  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:19.115979  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:19.616474  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:20.116435  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:20.615936  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:21.115963  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:21.616474  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:22.115724  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:22.616173  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:23.116602  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:23.616301  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:24.116484  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:24.616677  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:25.116304  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:25.616250  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:26.116434  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:26.615730  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:27.116005  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:27.616356  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:28.116650  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:28.616666  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.115952  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.616060  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0804 01:28:29.733039  112472 kubeadm.go:1113] duration metric: took 12.31573191s to wait for elevateKubeSystemPrivileges
	I0804 01:28:29.733085  112472 kubeadm.go:394] duration metric: took 24.157991663s to StartCluster
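
Note: the block of repeated "kubectl get sa default" calls above is minikube polling roughly every 500ms (visible in the timestamps) until kubeadm has created the default service account in the new cluster; the elevateKubeSystemPrivileges duration metric summarises that wait. A minimal sketch of such a wait loop, with a hypothetical kubectlGetDefaultSA helper that shells out to the same command and an assumed two-minute timeout (not taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// kubectlGetDefaultSA mirrors the logged command: it returns nil once
	// `kubectl get sa default` succeeds against the new cluster.
	func kubectlGetDefaultSA() error {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		return cmd.Run()
	}

	func main() {
		start := time.Now()
		deadline := start.Add(2 * time.Minute) // assumed timeout, for illustration only
		for time.Now().Before(deadline) {
			if err := kubectlGetDefaultSA(); err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		fmt.Println("timed out waiting for default service account")
	}
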
	I0804 01:28:29.733110  112472 settings.go:142] acquiring lock: {Name:mkf532aceb8d8524495256eb01b2b67c117281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:29.733210  112472 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:28:29.734249  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/kubeconfig: {Name:mk9db0d5521301bbe44f571d0153ba4b675d0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:29.734513  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0804 01:28:29.734516  112472 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:28:29.734544  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:28:29.734566  112472 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 01:28:29.734646  112472 addons.go:69] Setting storage-provisioner=true in profile "ha-998889"
	I0804 01:28:29.734659  112472 addons.go:69] Setting default-storageclass=true in profile "ha-998889"
	I0804 01:28:29.734687  112472 addons.go:234] Setting addon storage-provisioner=true in "ha-998889"
	I0804 01:28:29.734706  112472 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-998889"
	I0804 01:28:29.734723  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:29.734739  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:29.735117  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.735149  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.735168  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.735182  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.750614  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0804 01:28:29.751009  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0804 01:28:29.751245  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.751525  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.751743  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.751763  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.752055  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.752071  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.752110  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.752386  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.752568  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.752625  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.752666  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.754809  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:28:29.755181  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 01:28:29.755705  112472 cert_rotation.go:137] Starting client certificate rotation controller
	I0804 01:28:29.755999  112472 addons.go:234] Setting addon default-storageclass=true in "ha-998889"
	I0804 01:28:29.756055  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:29.756485  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.756532  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.768633  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0804 01:28:29.769157  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.769723  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.769746  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.770106  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.770309  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.771991  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:29.773402  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38081
	I0804 01:28:29.773795  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.773872  112472 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 01:28:29.774250  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.774273  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.774612  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.775085  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:29.775128  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:29.775208  112472 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 01:28:29.775238  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 01:28:29.775259  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:29.778293  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.778671  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:29.778732  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.778826  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:29.779026  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:29.779194  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:29.779349  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
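
Note: "ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)" means the addon manifest is streamed from memory to the node over the SSH session rather than copied from a local file. A rough equivalent using the stock ssh client and sudo tee (not minikube's actual transport; host, key path and manifest content below are illustrative placeholders):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// pushFileOverSSH streams an in-memory payload to a remote path by piping
	// it into `sudo tee` on the other end.
	func pushFileOverSSH(host, keyPath, remotePath string, payload []byte) error {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			host,
			"sudo tee "+remotePath+" >/dev/null")
		cmd.Stdin = bytes.NewReader(payload)
		return cmd.Run()
	}

	func main() {
		manifest := []byte("# storage-provisioner manifest would go here\n")
		err := pushFileOverSSH("docker@192.168.39.12",
			"/path/to/machines/ha-998889/id_rsa", // placeholder key path
			"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
		fmt.Println("push error:", err)
	}
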
	I0804 01:28:29.790238  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46387
	I0804 01:28:29.790725  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:29.791193  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:29.791217  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:29.791530  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:29.791721  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:29.793348  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:29.793620  112472 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 01:28:29.793637  112472 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 01:28:29.793657  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:29.796210  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.796581  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:29.796602  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:29.796779  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:29.796947  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:29.797114  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:29.797257  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:28:29.865785  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0804 01:28:29.958212  112472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 01:28:29.968317  112472 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 01:28:30.252653  112472 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
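
Note: the long "kubectl ... get configmap coredns | sed ... | kubectl replace -f -" pipeline above injects a hosts block for host.minikube.internal (192.168.39.1) into the CoreDNS Corefile ahead of the "forward . /etc/resolv.conf" directive, which is what "host record injected into CoreDNS's ConfigMap" reports. A small sketch of the same string edit done in Go instead of sed, assuming the Corefile text is already in hand (the sample Corefile below is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a CoreDNS hosts{} stanza in front of the
	// "forward . /etc/resolv.conf" directive, mirroring the sed expression in the log.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
			hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out = append(out, hostsBlock)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
		fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
	}
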
	I0804 01:28:30.270098  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.270125  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.270433  112472 main.go:141] libmachine: (ha-998889) DBG | Closing plugin on server side
	I0804 01:28:30.270503  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.270524  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.270537  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.270548  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.270810  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.270824  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.270982  112472 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0804 01:28:30.270990  112472 round_trippers.go:469] Request Headers:
	I0804 01:28:30.271001  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:28:30.271007  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:28:30.278575  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:28:30.279171  112472 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0804 01:28:30.279186  112472 round_trippers.go:469] Request Headers:
	I0804 01:28:30.279193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:28:30.279197  112472 round_trippers.go:473]     Content-Type: application/json
	I0804 01:28:30.279200  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:28:30.281943  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:28:30.282103  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.282113  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.282345  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.282364  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.282366  112472 main.go:141] libmachine: (ha-998889) DBG | Closing plugin on server side
	I0804 01:28:30.475645  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.475673  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.476028  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.476048  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.476057  112472 main.go:141] libmachine: Making call to close driver server
	I0804 01:28:30.476064  112472 main.go:141] libmachine: (ha-998889) Calling .Close
	I0804 01:28:30.476319  112472 main.go:141] libmachine: Successfully made call to close driver server
	I0804 01:28:30.476330  112472 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 01:28:30.478005  112472 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0804 01:28:30.479238  112472 addons.go:510] duration metric: took 744.675262ms for enable addons: enabled=[default-storageclass storage-provisioner]
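
Note: the GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT on .../storageclasses/standard earlier in this block is the default-storageclass addon ensuring the "standard" StorageClass carries the default-class annotation. As an illustration only (minikube issues the raw PUT through its own REST client; this sketch reaches the same end state with kubectl patch):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mark the "standard" StorageClass as the cluster default via the
		// standard Kubernetes annotation.
		patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
		cmd := exec.Command("kubectl", "patch", "storageclass", "standard", "-p", patch)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}
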
	I0804 01:28:30.479272  112472 start.go:246] waiting for cluster config update ...
	I0804 01:28:30.479285  112472 start.go:255] writing updated cluster config ...
	I0804 01:28:30.480863  112472 out.go:177] 
	I0804 01:28:30.482606  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:30.482684  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:30.484303  112472 out.go:177] * Starting "ha-998889-m02" control-plane node in "ha-998889" cluster
	I0804 01:28:30.485460  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:28:30.485492  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:28:30.485599  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:28:30.485624  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:28:30.485730  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:30.486496  112472 start.go:360] acquireMachinesLock for ha-998889-m02: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:28:30.486550  112472 start.go:364] duration metric: took 31.213µs to acquireMachinesLock for "ha-998889-m02"
	I0804 01:28:30.486565  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:28:30.486638  112472 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0804 01:28:30.488066  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:28:30.488167  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:30.488208  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:30.503256  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0804 01:28:30.503667  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:30.504160  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:30.504194  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:30.504538  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:30.504781  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:30.505051  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:30.505230  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:28:30.505265  112472 client.go:168] LocalClient.Create starting
	I0804 01:28:30.505305  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:28:30.505394  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:28:30.505426  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:28:30.505500  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:28:30.505528  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:28:30.505544  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:28:30.505566  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:28:30.505577  112472 main.go:141] libmachine: (ha-998889-m02) Calling .PreCreateCheck
	I0804 01:28:30.505766  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:30.506203  112472 main.go:141] libmachine: Creating machine...
	I0804 01:28:30.506221  112472 main.go:141] libmachine: (ha-998889-m02) Calling .Create
	I0804 01:28:30.506352  112472 main.go:141] libmachine: (ha-998889-m02) Creating KVM machine...
	I0804 01:28:30.507660  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found existing default KVM network
	I0804 01:28:30.507826  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found existing private KVM network mk-ha-998889
	I0804 01:28:30.507953  112472 main.go:141] libmachine: (ha-998889-m02) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 ...
	I0804 01:28:30.507983  112472 main.go:141] libmachine: (ha-998889-m02) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:28:30.508117  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.507974  112881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:28:30.508168  112472 main.go:141] libmachine: (ha-998889-m02) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:28:30.761338  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.761188  112881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa...
	I0804 01:28:30.919696  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.919552  112881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/ha-998889-m02.rawdisk...
	I0804 01:28:30.919735  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Writing magic tar header
	I0804 01:28:30.919751  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Writing SSH key tar header
	I0804 01:28:30.919764  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:30.919688  112881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 ...
	I0804 01:28:30.919871  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02
	I0804 01:28:30.919904  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02 (perms=drwx------)
	I0804 01:28:30.919919  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:28:30.919943  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:28:30.919962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:28:30.919973  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:28:30.919989  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:28:30.920002  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:28:30.920012  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:28:30.920027  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:28:30.920037  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:28:30.920050  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Checking permissions on dir: /home
	I0804 01:28:30.920060  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Skipping /home - not owner
	I0804 01:28:30.920097  112472 main.go:141] libmachine: (ha-998889-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:28:30.920116  112472 main.go:141] libmachine: (ha-998889-m02) Creating domain...
	I0804 01:28:30.921063  112472 main.go:141] libmachine: (ha-998889-m02) define libvirt domain using xml: 
	I0804 01:28:30.921081  112472 main.go:141] libmachine: (ha-998889-m02) <domain type='kvm'>
	I0804 01:28:30.921091  112472 main.go:141] libmachine: (ha-998889-m02)   <name>ha-998889-m02</name>
	I0804 01:28:30.921099  112472 main.go:141] libmachine: (ha-998889-m02)   <memory unit='MiB'>2200</memory>
	I0804 01:28:30.921107  112472 main.go:141] libmachine: (ha-998889-m02)   <vcpu>2</vcpu>
	I0804 01:28:30.921113  112472 main.go:141] libmachine: (ha-998889-m02)   <features>
	I0804 01:28:30.921123  112472 main.go:141] libmachine: (ha-998889-m02)     <acpi/>
	I0804 01:28:30.921127  112472 main.go:141] libmachine: (ha-998889-m02)     <apic/>
	I0804 01:28:30.921135  112472 main.go:141] libmachine: (ha-998889-m02)     <pae/>
	I0804 01:28:30.921140  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921148  112472 main.go:141] libmachine: (ha-998889-m02)   </features>
	I0804 01:28:30.921153  112472 main.go:141] libmachine: (ha-998889-m02)   <cpu mode='host-passthrough'>
	I0804 01:28:30.921159  112472 main.go:141] libmachine: (ha-998889-m02)   
	I0804 01:28:30.921164  112472 main.go:141] libmachine: (ha-998889-m02)   </cpu>
	I0804 01:28:30.921171  112472 main.go:141] libmachine: (ha-998889-m02)   <os>
	I0804 01:28:30.921176  112472 main.go:141] libmachine: (ha-998889-m02)     <type>hvm</type>
	I0804 01:28:30.921183  112472 main.go:141] libmachine: (ha-998889-m02)     <boot dev='cdrom'/>
	I0804 01:28:30.921188  112472 main.go:141] libmachine: (ha-998889-m02)     <boot dev='hd'/>
	I0804 01:28:30.921194  112472 main.go:141] libmachine: (ha-998889-m02)     <bootmenu enable='no'/>
	I0804 01:28:30.921198  112472 main.go:141] libmachine: (ha-998889-m02)   </os>
	I0804 01:28:30.921203  112472 main.go:141] libmachine: (ha-998889-m02)   <devices>
	I0804 01:28:30.921210  112472 main.go:141] libmachine: (ha-998889-m02)     <disk type='file' device='cdrom'>
	I0804 01:28:30.921218  112472 main.go:141] libmachine: (ha-998889-m02)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/boot2docker.iso'/>
	I0804 01:28:30.921225  112472 main.go:141] libmachine: (ha-998889-m02)       <target dev='hdc' bus='scsi'/>
	I0804 01:28:30.921230  112472 main.go:141] libmachine: (ha-998889-m02)       <readonly/>
	I0804 01:28:30.921236  112472 main.go:141] libmachine: (ha-998889-m02)     </disk>
	I0804 01:28:30.921242  112472 main.go:141] libmachine: (ha-998889-m02)     <disk type='file' device='disk'>
	I0804 01:28:30.921250  112472 main.go:141] libmachine: (ha-998889-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:28:30.921262  112472 main.go:141] libmachine: (ha-998889-m02)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/ha-998889-m02.rawdisk'/>
	I0804 01:28:30.921269  112472 main.go:141] libmachine: (ha-998889-m02)       <target dev='hda' bus='virtio'/>
	I0804 01:28:30.921274  112472 main.go:141] libmachine: (ha-998889-m02)     </disk>
	I0804 01:28:30.921281  112472 main.go:141] libmachine: (ha-998889-m02)     <interface type='network'>
	I0804 01:28:30.921287  112472 main.go:141] libmachine: (ha-998889-m02)       <source network='mk-ha-998889'/>
	I0804 01:28:30.921294  112472 main.go:141] libmachine: (ha-998889-m02)       <model type='virtio'/>
	I0804 01:28:30.921299  112472 main.go:141] libmachine: (ha-998889-m02)     </interface>
	I0804 01:28:30.921306  112472 main.go:141] libmachine: (ha-998889-m02)     <interface type='network'>
	I0804 01:28:30.921325  112472 main.go:141] libmachine: (ha-998889-m02)       <source network='default'/>
	I0804 01:28:30.921332  112472 main.go:141] libmachine: (ha-998889-m02)       <model type='virtio'/>
	I0804 01:28:30.921337  112472 main.go:141] libmachine: (ha-998889-m02)     </interface>
	I0804 01:28:30.921343  112472 main.go:141] libmachine: (ha-998889-m02)     <serial type='pty'>
	I0804 01:28:30.921349  112472 main.go:141] libmachine: (ha-998889-m02)       <target port='0'/>
	I0804 01:28:30.921368  112472 main.go:141] libmachine: (ha-998889-m02)     </serial>
	I0804 01:28:30.921380  112472 main.go:141] libmachine: (ha-998889-m02)     <console type='pty'>
	I0804 01:28:30.921391  112472 main.go:141] libmachine: (ha-998889-m02)       <target type='serial' port='0'/>
	I0804 01:28:30.921412  112472 main.go:141] libmachine: (ha-998889-m02)     </console>
	I0804 01:28:30.921430  112472 main.go:141] libmachine: (ha-998889-m02)     <rng model='virtio'>
	I0804 01:28:30.921440  112472 main.go:141] libmachine: (ha-998889-m02)       <backend model='random'>/dev/random</backend>
	I0804 01:28:30.921445  112472 main.go:141] libmachine: (ha-998889-m02)     </rng>
	I0804 01:28:30.921450  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921456  112472 main.go:141] libmachine: (ha-998889-m02)     
	I0804 01:28:30.921461  112472 main.go:141] libmachine: (ha-998889-m02)   </devices>
	I0804 01:28:30.921466  112472 main.go:141] libmachine: (ha-998889-m02) </domain>
	I0804 01:28:30.921495  112472 main.go:141] libmachine: (ha-998889-m02) 
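
Note: the XML printed above is the full libvirt domain definition for ha-998889-m02 (2 vCPUs, 2200 MiB RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on mk-ha-998889 and default). The kvm2 driver talks to libvirt through its API; as a rough stand-in only, an equivalent domain could be registered and booted from an XML file with the virsh CLI (paths and names below are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// defineAndStart registers a domain from an XML file and boots it. This is
	// an illustration with virsh, not the driver's actual libvirt API calls.
	func defineAndStart(xmlPath, domain string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			return fmt.Errorf("start: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(defineAndStart("/tmp/ha-998889-m02.xml", "ha-998889-m02"))
	}
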
	I0804 01:28:30.929778  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:15:1a:27 in network default
	I0804 01:28:30.930433  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring networks are active...
	I0804 01:28:30.930454  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:30.931330  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring network default is active
	I0804 01:28:30.931670  112472 main.go:141] libmachine: (ha-998889-m02) Ensuring network mk-ha-998889 is active
	I0804 01:28:30.932110  112472 main.go:141] libmachine: (ha-998889-m02) Getting domain xml...
	I0804 01:28:30.933052  112472 main.go:141] libmachine: (ha-998889-m02) Creating domain...
	I0804 01:28:32.149109  112472 main.go:141] libmachine: (ha-998889-m02) Waiting to get IP...
	I0804 01:28:32.150031  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.150399  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.150455  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.150381  112881 retry.go:31] will retry after 268.179165ms: waiting for machine to come up
	I0804 01:28:32.419905  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.420328  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.420372  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.420289  112881 retry.go:31] will retry after 367.807233ms: waiting for machine to come up
	I0804 01:28:32.790173  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:32.790611  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:32.790644  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:32.790569  112881 retry.go:31] will retry after 425.29844ms: waiting for machine to come up
	I0804 01:28:33.217193  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:33.217673  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:33.217701  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:33.217622  112881 retry.go:31] will retry after 456.348174ms: waiting for machine to come up
	I0804 01:28:33.675237  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:33.675694  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:33.675719  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:33.675643  112881 retry.go:31] will retry after 744.6172ms: waiting for machine to come up
	I0804 01:28:34.421724  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:34.422221  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:34.422255  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:34.422180  112881 retry.go:31] will retry after 953.022328ms: waiting for machine to come up
	I0804 01:28:35.377632  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:35.378080  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:35.378120  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:35.378025  112881 retry.go:31] will retry after 727.937271ms: waiting for machine to come up
	I0804 01:28:36.107712  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:36.108227  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:36.108268  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:36.108150  112881 retry.go:31] will retry after 1.033849143s: waiting for machine to come up
	I0804 01:28:37.143498  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:37.143943  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:37.143962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:37.143922  112881 retry.go:31] will retry after 1.350606885s: waiting for machine to come up
	I0804 01:28:38.495904  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:38.496349  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:38.496367  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:38.496308  112881 retry.go:31] will retry after 1.90273357s: waiting for machine to come up
	I0804 01:28:40.401125  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:40.401637  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:40.401670  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:40.401581  112881 retry.go:31] will retry after 2.647896385s: waiting for machine to come up
	I0804 01:28:43.052964  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:43.053480  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:43.053511  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:43.053422  112881 retry.go:31] will retry after 2.25124518s: waiting for machine to come up
	I0804 01:28:45.307295  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:45.307695  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:45.307730  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:45.307650  112881 retry.go:31] will retry after 4.396427726s: waiting for machine to come up
	I0804 01:28:49.706546  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:49.706941  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find current IP address of domain ha-998889-m02 in network mk-ha-998889
	I0804 01:28:49.706985  112472 main.go:141] libmachine: (ha-998889-m02) DBG | I0804 01:28:49.706909  112881 retry.go:31] will retry after 4.887319809s: waiting for machine to come up
	I0804 01:28:54.595364  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.595847  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has current primary IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.595873  112472 main.go:141] libmachine: (ha-998889-m02) Found IP for machine: 192.168.39.200
	I0804 01:28:54.595892  112472 main.go:141] libmachine: (ha-998889-m02) Reserving static IP address...
	I0804 01:28:54.596265  112472 main.go:141] libmachine: (ha-998889-m02) DBG | unable to find host DHCP lease matching {name: "ha-998889-m02", mac: "52:54:00:bf:26:17", ip: "192.168.39.200"} in network mk-ha-998889
	I0804 01:28:54.669966  112472 main.go:141] libmachine: (ha-998889-m02) Reserved static IP address: 192.168.39.200
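
Note: the "retry.go:31] will retry after ..." lines above show the driver polling libvirt's DHCP leases for the new domain's MAC address, sleeping for an increasing, slightly randomized interval (268ms, 367ms, 425ms, ... up to several seconds) until the lease appears at 192.168.39.200. A minimal sketch of that style of backoff loop, with a hypothetical lookupIP probe standing in for the lease query:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var attempts int

	// lookupIP is a stand-in for querying the DHCP leases for the VM's MAC.
	// Here it simply fails a few times before "finding" an address.
	func lookupIP() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.200", nil
	}

	func main() {
		wait := 250 * time.Millisecond
		for {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Grow the wait and add jitter, roughly like the intervals in the log.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2
		}
	}
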
	I0804 01:28:54.670001  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Getting to WaitForSSH function...
	I0804 01:28:54.670011  112472 main.go:141] libmachine: (ha-998889-m02) Waiting for SSH to be available...
	I0804 01:28:54.672968  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.673435  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.673465  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.673571  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using SSH client type: external
	I0804 01:28:54.673596  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa (-rw-------)
	I0804 01:28:54.673631  112472 main.go:141] libmachine: (ha-998889-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:28:54.673644  112472 main.go:141] libmachine: (ha-998889-m02) DBG | About to run SSH command:
	I0804 01:28:54.673661  112472 main.go:141] libmachine: (ha-998889-m02) DBG | exit 0
	I0804 01:28:54.801760  112472 main.go:141] libmachine: (ha-998889-m02) DBG | SSH cmd err, output: <nil>: 
	I0804 01:28:54.802063  112472 main.go:141] libmachine: (ha-998889-m02) KVM machine creation complete!
	I0804 01:28:54.802368  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:54.802882  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:54.803073  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:54.803244  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:28:54.803257  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:28:54.804651  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:28:54.804672  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:28:54.804678  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:28:54.804684  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:54.807078  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.807437  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.807464  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.807584  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:54.807763  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.807893  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.808025  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:54.808217  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:54.808418  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:54.808429  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:28:54.916476  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:28:54.916504  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:28:54.916512  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:54.919614  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.920107  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:54.920132  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:54.920376  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:54.920594  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.920750  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:54.920911  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:54.921127  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:54.921409  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:54.921427  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:28:55.026395  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:28:55.026504  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:28:55.026517  112472 main.go:141] libmachine: Provisioning with buildroot...
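
Note: "Detecting the provisioner" is done by running "cat /etc/os-release" over SSH and matching the ID/NAME fields; here the guest reports Buildroot, so the buildroot provisioner is selected. A small local sketch of that parse (reading the file directly rather than over SSH):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osReleaseID extracts the ID= field from an os-release style file.
	func osReleaseID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osReleaseID("/etc/os-release")
		fmt.Println(id, err) // on this guest it would print "buildroot"
	}
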
	I0804 01:28:55.026530  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.026852  112472 buildroot.go:166] provisioning hostname "ha-998889-m02"
	I0804 01:28:55.026884  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.027051  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.030120  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.030560  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.030590  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.030755  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.030985  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.031160  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.031338  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.031502  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.031702  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.031718  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889-m02 && echo "ha-998889-m02" | sudo tee /etc/hostname
	I0804 01:28:55.153923  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889-m02
	
	I0804 01:28:55.153955  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.156619  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.156986  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.157029  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.157243  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.157477  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.157651  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.157767  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.157911  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.158137  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.158154  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:28:55.277469  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:28:55.277508  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:28:55.277527  112472 buildroot.go:174] setting up certificates
	I0804 01:28:55.277539  112472 provision.go:84] configureAuth start
	I0804 01:28:55.277553  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetMachineName
	I0804 01:28:55.277902  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:55.280624  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.281054  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.281079  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.281327  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.283605  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.283962  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.283992  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.284117  112472 provision.go:143] copyHostCerts
	I0804 01:28:55.284151  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:28:55.284207  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:28:55.284217  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:28:55.284282  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:28:55.284369  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:28:55.284386  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:28:55.284393  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:28:55.284416  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:28:55.284506  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:28:55.284527  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:28:55.284531  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:28:55.284556  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:28:55.284616  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889-m02 san=[127.0.0.1 192.168.39.200 ha-998889-m02 localhost minikube]
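
Note: the line above generates a server certificate for the new node, signed by the minikube CA, with the SANs listed (127.0.0.1, 192.168.39.200, ha-998889-m02, localhost, minikube) and the org jenkins.ha-998889-m02. A compact sketch of that kind of CA-signed server cert using Go's crypto/x509 (this generates a throwaway CA in-process for illustration; minikube loads its existing CA key pair from disk, and error handling is omitted for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (minikube would reuse ca.pem / ca-key.pem instead).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-998889-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-998889-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		fmt.Println("(server key and CA output omitted)")
	}
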
	I0804 01:28:55.370416  112472 provision.go:177] copyRemoteCerts
	I0804 01:28:55.370480  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:28:55.370506  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.373305  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.373706  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.373740  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.373908  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.374089  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.374214  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.374334  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:55.455658  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:28:55.455756  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:28:55.481778  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:28:55.481879  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 01:28:55.505846  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:28:55.505919  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 01:28:55.530149  112472 provision.go:87] duration metric: took 252.586948ms to configureAuth
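	(Illustration of the configureAuth step above: it issues a per-machine server certificate whose SANs cover 127.0.0.1, the node IP 192.168.39.200, the hostname ha-998889-m02, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. The following is a minimal Go sketch of producing such a SAN-bearing certificate with the standard library only; newServerCert and the throwaway CA are hypothetical, not minikube's provision code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// newServerCert is a hypothetical helper: it issues a server certificate
	// signed by the given CA, carrying the SAN list seen in the log above
	// (node IP, hostname, localhost, minikube).
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-998889-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.200")},
			DNSNames:     []string{"ha-998889-m02", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}

	func main() {
		// Self-signed CA for the sketch only; the real run reuses ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		certPEM, keyPEM, err := newServerCert(caCert, caKey)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("server.pem", certPEM, 0o644)
		_ = os.WriteFile("server-key.pem", keyPEM, 0o600)
	}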
	I0804 01:28:55.530186  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:28:55.530406  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:55.530556  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.533389  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.533826  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.533857  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.534022  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.534248  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.534388  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.534569  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.534765  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.534982  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.535004  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:28:55.805013  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:28:55.805045  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:28:55.805053  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetURL
	I0804 01:28:55.806487  112472 main.go:141] libmachine: (ha-998889-m02) DBG | Using libvirt version 6000000
	I0804 01:28:55.808907  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.809254  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.809275  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.809474  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:28:55.809493  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:28:55.809502  112472 client.go:171] duration metric: took 25.304226093s to LocalClient.Create
	I0804 01:28:55.809533  112472 start.go:167] duration metric: took 25.304304839s to libmachine.API.Create "ha-998889"
	I0804 01:28:55.809545  112472 start.go:293] postStartSetup for "ha-998889-m02" (driver="kvm2")
	I0804 01:28:55.809558  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:28:55.809592  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:55.809860  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:28:55.809886  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.811927  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.812234  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.812260  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.812385  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.812597  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.812759  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.812937  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:55.896262  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:28:55.901088  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:28:55.901113  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:28:55.901189  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:28:55.901292  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:28:55.901307  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:28:55.901437  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:28:55.911162  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:55.935681  112472 start.go:296] duration metric: took 126.119459ms for postStartSetup
	I0804 01:28:55.935742  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetConfigRaw
	I0804 01:28:55.936546  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:55.939881  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.940391  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.940422  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.940670  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:28:55.940907  112472 start.go:128] duration metric: took 25.454257234s to createHost
	I0804 01:28:55.940935  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:55.943420  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.943758  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:55.943783  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:55.943962  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:55.944144  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.944349  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:55.944531  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:55.944700  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:28:55.944900  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0804 01:28:55.944914  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:28:56.050260  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722734936.026904772
	
	I0804 01:28:56.050285  112472 fix.go:216] guest clock: 1722734936.026904772
	I0804 01:28:56.050296  112472 fix.go:229] Guest: 2024-08-04 01:28:56.026904772 +0000 UTC Remote: 2024-08-04 01:28:55.94092076 +0000 UTC m=+81.942782970 (delta=85.984012ms)
	I0804 01:28:56.050317  112472 fix.go:200] guest clock delta is within tolerance: 85.984012ms
	I0804 01:28:56.050324  112472 start.go:83] releasing machines lock for "ha-998889-m02", held for 25.563767731s
	I0804 01:28:56.050350  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.050643  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:56.053141  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.053574  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.053596  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.055845  112472 out.go:177] * Found network options:
	I0804 01:28:56.057415  112472 out.go:177]   - NO_PROXY=192.168.39.12
	W0804 01:28:56.058561  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:28:56.058602  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059197  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059409  112472 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:28:56.059516  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:28:56.059557  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	W0804 01:28:56.059633  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:28:56.059717  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:28:56.059744  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:28:56.062277  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062338  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062590  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.062609  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062632  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:56.062648  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:56.062810  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:56.062982  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:28:56.063073  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:56.063148  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:28:56.063210  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:56.063287  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:28:56.063342  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:56.063396  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:28:56.304368  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:28:56.311725  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:28:56.311804  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:28:56.328673  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:28:56.328701  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:28:56.328768  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:28:56.346593  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:28:56.362206  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:28:56.362264  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:28:56.376930  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:28:56.391727  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:28:56.519492  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:28:56.680072  112472 docker.go:233] disabling docker service ...
	I0804 01:28:56.680171  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:28:56.695362  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:28:56.709491  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:28:56.829866  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:28:56.947379  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:28:56.961963  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:28:56.980015  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:28:56.980086  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:56.991285  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:28:56.991362  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.003712  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.015998  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.029215  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:28:57.041461  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.052536  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.070434  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:28:57.081642  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:28:57.091874  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:28:57.091931  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:28:57.106309  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 01:28:57.116586  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:28:57.240378  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
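	(The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place — pause image, cgroup_manager, conmon_cgroup, default_sysctls — then reloads systemd and restarts CRI-O. Below is a rough Go sketch of issuing the same style of sed edits and restart, assuming root and local execution rather than minikube's SSH runner; applyCrioConfig is a hypothetical helper, not minikube's crio.go.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyCrioConfig applies the same kind of in-place edits to 02-crio.conf
	// that the log shows, then restarts CRI-O. Running this for real requires root.
	func applyCrioConfig(pauseImage, cgroupManager string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%q failed: %v\n%s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := applyCrioConfig("registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
			fmt.Println("error:", err)
		}
	}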
	I0804 01:28:57.374852  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:28:57.374944  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:28:57.380338  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:28:57.380413  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:28:57.384825  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:28:57.426828  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:28:57.426926  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:57.455982  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:28:57.485984  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:28:57.487486  112472 out.go:177]   - env NO_PROXY=192.168.39.12
	I0804 01:28:57.488688  112472 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:28:57.491091  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:57.491401  112472 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:28:45 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:28:57.491429  112472 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:28:57.491581  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:28:57.495938  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:28:57.508732  112472 mustload.go:65] Loading cluster: ha-998889
	I0804 01:28:57.508983  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:28:57.509252  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:57.509302  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:57.524594  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0804 01:28:57.525539  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:57.526011  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:57.526031  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:57.526386  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:57.526592  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:28:57.528097  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:28:57.528435  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:28:57.528491  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:28:57.544362  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0804 01:28:57.544824  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:28:57.545302  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:28:57.545327  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:28:57.545694  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:28:57.545959  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:28:57.546205  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.200
	I0804 01:28:57.546218  112472 certs.go:194] generating shared ca certs ...
	I0804 01:28:57.546233  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.546371  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:28:57.546412  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:28:57.546422  112472 certs.go:256] generating profile certs ...
	I0804 01:28:57.546483  112472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:28:57.546510  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706
	I0804 01:28:57.546524  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.254]
	I0804 01:28:57.952681  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 ...
	I0804 01:28:57.952711  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706: {Name:mk16aa54dedad4e240fa220451742f589cf5420b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.952910  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706 ...
	I0804 01:28:57.952928  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706: {Name:mkb647fef86cc95a64e2aca9905e764b6b7263b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:28:57.953036  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cef94706 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:28:57.953171  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cef94706 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:28:57.953302  112472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:28:57.953322  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:28:57.953336  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:28:57.953350  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:28:57.953408  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:28:57.953423  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:28:57.953436  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:28:57.953450  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:28:57.953469  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:28:57.953536  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:28:57.953576  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:28:57.953586  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:28:57.953619  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:28:57.953663  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:28:57.953695  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:28:57.953747  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:28:57.953791  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:28:57.953815  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:57.953841  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:28:57.953885  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:28:57.956832  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:57.957242  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:28:57.957268  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:28:57.957428  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:28:57.957638  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:28:57.957820  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:28:57.957952  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:28:58.033777  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0804 01:28:58.038767  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 01:28:58.050460  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0804 01:28:58.055757  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 01:28:58.066499  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 01:28:58.071462  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 01:28:58.082123  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0804 01:28:58.086434  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 01:28:58.097739  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0804 01:28:58.103591  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 01:28:58.115120  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0804 01:28:58.120728  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 01:28:58.132224  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:28:58.169844  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:28:58.193921  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:28:58.217944  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:28:58.241393  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 01:28:58.266903  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 01:28:58.291393  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:28:58.315927  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:28:58.340516  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:28:58.366622  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:28:58.392075  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:28:58.416933  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 01:28:58.435506  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 01:28:58.452323  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 01:28:58.469418  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 01:28:58.485933  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 01:28:58.502667  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 01:28:58.519181  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 01:28:58.535798  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:28:58.541695  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:28:58.552808  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.557427  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.557490  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:28:58.563334  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:28:58.574153  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:28:58.585290  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.590009  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.590102  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:28:58.596387  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:28:58.608361  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:28:58.619806  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.624852  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.624943  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:28:58.630829  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
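	(The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject hash — for example b5213941.0 for minikubeCA.pem. A small Go sketch of that pattern follows; linkCACert is a hypothetical helper and needs root to write into /etc/ssl/certs.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert reproduces the pattern in the log: given a PEM already placed
	// under /usr/share/ca-certificates, symlink it from /etc/ssl/certs under its
	// OpenSSL subject hash (<hash>.0), the name OpenSSL uses for CA lookup.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: drop any stale link, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("error:", err)
		}
	}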
	I0804 01:28:58.642877  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:28:58.647833  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:28:58.647884  112472 kubeadm.go:934] updating node {m02 192.168.39.200 8443 v1.30.3 crio true true} ...
	I0804 01:28:58.647985  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:28:58.648017  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:28:58.648059  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:28:58.668536  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:28:58.668613  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
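	(The kube-vip config above is generated from cluster values — VIP 192.168.39.254, port 8443, interface eth0 — and, as shown further down, copied to /etc/kubernetes/manifests/kube-vip.yaml so kubelet runs it as a static pod. A cut-down Go sketch of rendering such a manifest with text/template; the template and kubeVIPValues struct are illustrative only, not minikube's kube-vip.go.)

	package main

	import (
		"os"
		"text/template"
	)

	// kubeVIPValues holds the per-cluster fields that vary in the manifest above.
	type kubeVIPValues struct {
		VIP       string
		Port      string
		Interface string
		Image     string
	}

	const kubeVIPTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .VIP }}
	    image: {{ .Image }}
	    name: kube-vip
	  hostNetwork: true
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
		// The rendered output would be copied to /etc/kubernetes/manifests/kube-vip.yaml,
		// where kubelet picks it up as a static pod on each control-plane node.
		vals := kubeVIPValues{VIP: "192.168.39.254", Port: "8443", Interface: "eth0",
			Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
		if err := t.Execute(os.Stdout, vals); err != nil {
			panic(err)
		}
	}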
	I0804 01:28:58.668669  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:58.680647  112472 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 01:28:58.680724  112472 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 01:28:58.692441  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0804 01:28:58.692470  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:28:58.692523  112472 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0804 01:28:58.692546  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:28:58.692523  112472 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0804 01:28:58.697122  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 01:28:58.697156  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 01:28:59.596488  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:28:59.596576  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:28:59.601893  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 01:28:59.601925  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 01:28:59.886375  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:28:59.902133  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:28:59.902257  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:28:59.906920  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 01:28:59.906962  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0804 01:29:00.313229  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 01:29:00.322995  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 01:29:00.340463  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:29:00.357581  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:29:00.374959  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:29:00.378987  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:29:00.392030  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:29:00.512075  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:29:00.530566  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:29:00.530967  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:29:00.531013  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:29:00.546784  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0804 01:29:00.547284  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:29:00.547808  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:29:00.547838  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:29:00.548162  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:29:00.548413  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:29:00.548612  112472 start.go:317] joinCluster: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:29:00.548708  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 01:29:00.548728  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:29:00.551822  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:29:00.552246  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:29:00.552273  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:29:00.552439  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:29:00.552637  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:29:00.552823  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:29:00.552993  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:29:00.710931  112472 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:00.710981  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tyjh8y.fzi76243575sf4so --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m02 --control-plane --apiserver-advertise-address=192.168.39.200 --apiserver-bind-port=8443"
	I0804 01:29:23.405264  112472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tyjh8y.fzi76243575sf4so --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m02 --control-plane --apiserver-advertise-address=192.168.39.200 --apiserver-bind-port=8443": (22.694253426s)
	I0804 01:29:23.405319  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 01:29:23.849202  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889-m02 minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=false
	I0804 01:29:23.995444  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-998889-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0804 01:29:24.121433  112472 start.go:319] duration metric: took 23.57281924s to joinCluster
	I0804 01:29:24.121519  112472 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:24.121802  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:29:24.123008  112472 out.go:177] * Verifying Kubernetes components...
	I0804 01:29:24.124632  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:29:24.388677  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:29:24.426816  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:29:24.427177  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 01:29:24.427264  112472 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0804 01:29:24.427539  112472 node_ready.go:35] waiting up to 6m0s for node "ha-998889-m02" to be "Ready" ...
	I0804 01:29:24.427664  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:24.427674  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:24.427683  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:24.427691  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:24.437163  112472 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0804 01:29:24.928231  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:24.928255  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:24.928267  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:24.928272  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:24.934840  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:29:25.427906  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:25.427932  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:25.427942  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:25.427947  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:25.431712  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:25.927883  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:25.927912  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:25.927923  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:25.927928  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:25.931522  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:26.428381  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:26.428403  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:26.428411  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:26.428415  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:26.431959  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:26.432693  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:26.928178  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:26.928209  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:26.928221  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:26.928228  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:26.931324  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:27.428382  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:27.428403  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:27.428412  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:27.428415  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:27.432110  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:27.927922  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:27.927948  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:27.927960  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:27.927966  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:27.932659  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:28.428674  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:28.428704  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:28.428716  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:28.428724  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:28.431795  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:28.927849  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:28.927877  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:28.927889  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:28.927897  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:28.931369  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:28.932010  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:29.427829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:29.427852  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:29.427860  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:29.427864  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:29.431026  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:29.928621  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:29.928649  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:29.928659  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:29.928663  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:29.931784  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.428495  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:30.428517  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:30.428525  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:30.428530  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:30.432464  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.928589  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:30.928613  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:30.928624  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:30.928631  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:30.932205  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:30.932830  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:31.428234  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:31.428258  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:31.428267  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:31.428272  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:31.432086  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:31.928358  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:31.928381  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:31.928389  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:31.928393  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:31.932119  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:32.428407  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:32.428431  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:32.428438  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:32.428444  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:32.431769  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:32.928581  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:32.928603  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:32.928613  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:32.928617  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:32.931484  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:33.428480  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:33.428510  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:33.428519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:33.428524  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:33.432714  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:33.433679  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:33.927920  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:33.927943  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:33.927951  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:33.927956  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:33.931628  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:34.428301  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:34.428324  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:34.428332  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:34.428337  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:34.431417  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:34.928406  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:34.928430  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:34.928438  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:34.928442  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:34.931855  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:35.427982  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:35.428003  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:35.428012  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:35.428016  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:35.431540  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:35.928493  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:35.928519  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:35.928530  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:35.928537  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:35.934468  112472 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0804 01:29:35.934963  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:36.428378  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:36.428400  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:36.428408  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:36.428412  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:36.431770  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:36.928354  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:36.928388  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:36.928399  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:36.928407  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:36.931884  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:37.427908  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:37.427933  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:37.427945  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:37.427951  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:37.431474  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:37.928435  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:37.928459  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:37.928466  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:37.928471  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:37.931675  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:38.428714  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:38.428738  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:38.428747  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:38.428752  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:38.432416  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:38.433151  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:38.928608  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:38.928630  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:38.928638  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:38.928642  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:38.932111  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:39.428762  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:39.428786  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:39.428795  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:39.428798  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:39.431928  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:39.928167  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:39.928193  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:39.928204  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:39.928209  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:39.931592  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.428226  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:40.428252  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:40.428263  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:40.428268  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:40.432080  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.928413  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:40.928444  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:40.928456  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:40.928462  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:40.931754  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:40.932508  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:41.427798  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:41.427820  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:41.427829  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:41.427834  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:41.432113  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:41.927998  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:41.928024  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:41.928035  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:41.928047  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:41.931169  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.428720  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:42.428747  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:42.428755  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:42.428759  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:42.432450  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.928530  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:42.928555  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:42.928564  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:42.928567  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:42.932425  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:42.933343  112472 node_ready.go:53] node "ha-998889-m02" has status "Ready":"False"
	I0804 01:29:43.428719  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:43.428743  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.428751  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.428755  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.432609  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.433240  112472 node_ready.go:49] node "ha-998889-m02" has status "Ready":"True"
	I0804 01:29:43.433261  112472 node_ready.go:38] duration metric: took 19.005699575s for node "ha-998889-m02" to be "Ready" ...
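
The long run of repeated GET /api/v1/nodes/ha-998889-m02 requests above is the node_ready wait: minikube re-fetches the Node object roughly twice a second and checks its Ready condition until it reports True, which here took just over 19 seconds. The check itself boils down to a few lines of client-go; the sketch below reuses the imports and clientset from the earlier example (plus "fmt"), and the helper name is illustrative:

    // waitNodeReady re-fetches the Node until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // comparable to the ~500ms polling cadence visible in the log
        }
        return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
    }
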
	I0804 01:29:43.433270  112472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:29:43.433335  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:43.433345  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.433368  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.433378  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.438356  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:43.444542  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.444639  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b8ds7
	I0804 01:29:43.444649  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.444656  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.444661  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.448123  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.449177  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.449192  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.449198  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.449204  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.452230  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.453218  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.453235  112472 pod_ready.go:81] duration metric: took 8.66995ms for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.453243  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.453288  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ddb5m
	I0804 01:29:43.453295  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.453301  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.453305  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.456144  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.456672  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.456689  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.456696  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.456701  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.460353  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.461231  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.461247  112472 pod_ready.go:81] duration metric: took 7.997864ms for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.461256  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.461302  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889
	I0804 01:29:43.461310  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.461317  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.461321  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.463901  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.464364  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.464379  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.464385  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.464388  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.467359  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:43.467784  112472 pod_ready.go:92] pod "etcd-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.467800  112472 pod_ready.go:81] duration metric: took 6.539173ms for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.467808  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.467853  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m02
	I0804 01:29:43.467860  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.467866  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.467871  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.470917  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.471835  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:43.471851  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.471860  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.471865  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.475070  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.475906  112472 pod_ready.go:92] pod "etcd-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.475921  112472 pod_ready.go:81] duration metric: took 8.107274ms for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.475933  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:43.629368  112472 request.go:629] Waited for 153.355144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:29:43.629458  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:29:43.629470  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.629482  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.629489  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.632566  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.828724  112472 request.go:629] Waited for 195.289574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.828815  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:43.828823  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:43.828832  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:43.828838  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:43.831942  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:43.832467  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:43.832489  112472 pod_ready.go:81] duration metric: took 356.548247ms for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
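
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, not from the API server: the rest.Config logged earlier shows QPS:0 and Burst:0, which client-go treats as the defaults of 5 requests per second with a burst of 10, so the rapid-fire pod and node GETs get spaced out on the client side. A harness that wanted to avoid that extra wait could raise the limits before building the clientset; a hedged sketch under the same assumptions as the earlier examples:

    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
    if err != nil {
        panic(err)
    }
    cfg.QPS = 50    // client-go falls back to 5.0 when this is left at 0
    cfg.Burst = 100 // client-go falls back to 10 when this is left at 0
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs // build clients and informers from cs as usual
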
	I0804 01:29:43.832502  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.029644  112472 request.go:629] Waited for 197.067109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:29:44.029742  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:29:44.029749  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.029757  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.029761  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.033449  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.228825  112472 request.go:629] Waited for 194.401684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:44.228903  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:44.228916  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.228943  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.228947  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.232325  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.232807  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:44.232825  112472 pod_ready.go:81] duration metric: took 400.314893ms for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.232834  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.428855  112472 request.go:629] Waited for 195.944534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:29:44.428939  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:29:44.428944  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.428952  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.428956  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.432243  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.629375  112472 request.go:629] Waited for 196.420241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:44.629453  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:44.629462  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.629473  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.629479  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.632648  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:44.633393  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:44.633412  112472 pod_ready.go:81] duration metric: took 400.571723ms for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.633423  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:44.829633  112472 request.go:629] Waited for 196.137466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:29:44.829734  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:29:44.829744  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:44.829753  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:44.829760  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:44.833570  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.029822  112472 request.go:629] Waited for 195.371221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.029890  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.029897  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.029908  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.029916  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.033380  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.034127  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.034152  112472 pod_ready.go:81] duration metric: took 400.722428ms for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.034166  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.229395  112472 request.go:629] Waited for 195.115343ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:29:45.229470  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:29:45.229478  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.229490  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.229498  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.232707  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.428822  112472 request.go:629] Waited for 195.313836ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:45.428923  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:45.428932  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.428943  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.428949  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.432466  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.433542  112472 pod_ready.go:92] pod "kube-proxy-56twz" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.433578  112472 pod_ready.go:81] duration metric: took 399.403294ms for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.433590  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.629724  112472 request.go:629] Waited for 196.037328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:29:45.629829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:29:45.629842  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.629855  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.629863  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.633517  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:45.829732  112472 request.go:629] Waited for 195.399679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.829805  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:45.829815  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:45.829829  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:45.829840  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:45.834582  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:45.835514  112472 pod_ready.go:92] pod "kube-proxy-v4j77" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:45.835533  112472 pod_ready.go:81] duration metric: took 401.935529ms for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:45.835542  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.029709  112472 request.go:629] Waited for 194.088454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:29:46.029772  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:29:46.029777  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.029785  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.029789  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.032566  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:46.229545  112472 request.go:629] Waited for 196.39197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:46.229616  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:29:46.229623  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.229636  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.229643  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.232829  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:46.233633  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:46.233653  112472 pod_ready.go:81] duration metric: took 398.104737ms for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.233663  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.429795  112472 request.go:629] Waited for 196.040676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:29:46.429857  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:29:46.429863  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.429871  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.429876  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.432532  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:29:46.629547  112472 request.go:629] Waited for 196.376781ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:46.629636  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:29:46.629644  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.629653  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.629659  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.632739  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:46.633308  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:29:46.633326  112472 pod_ready.go:81] duration metric: took 399.657247ms for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:29:46.633337  112472 pod_ready.go:38] duration metric: took 3.200048772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
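
Every pod_ready wait in the block above follows the same two-step pattern: GET the pod from kube-system, then GET the node it is scheduled on, and treat the pod as ready once its Ready condition reports True. The pod-side half of that check, as a client-go sketch reusing the setup from the earlier examples (helper name illustrative):

    // podIsReady reports whether the pod's Ready condition is currently True.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
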
	I0804 01:29:46.633365  112472 api_server.go:52] waiting for apiserver process to appear ...
	I0804 01:29:46.633423  112472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:29:46.648549  112472 api_server.go:72] duration metric: took 22.52698207s to wait for apiserver process to appear ...
	I0804 01:29:46.648583  112472 api_server.go:88] waiting for apiserver healthz status ...
	I0804 01:29:46.648607  112472 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0804 01:29:46.653004  112472 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0804 01:29:46.653079  112472 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0804 01:29:46.653086  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.653094  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.653103  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.654119  112472 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0804 01:29:46.654221  112472 api_server.go:141] control plane version: v1.30.3
	I0804 01:29:46.654238  112472 api_server.go:131] duration metric: took 5.648581ms to wait for apiserver health ...
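
The healthz step is a bare GET of the apiserver's /healthz endpoint, which answers with the literal body "ok" when healthy; the follow-up GET /version is what yields the "control plane version: v1.30.3" line. With a clientset already in hand, the same probe can go through the discovery REST client; a sketch under the same assumptions as the earlier examples:

    // apiserverHealthy hits /healthz and expects the literal response body "ok".
    func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected /healthz response: %q", body)
        }
        return nil
    }
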
	I0804 01:29:46.654246  112472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 01:29:46.829640  112472 request.go:629] Waited for 175.323296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:46.829723  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:46.829729  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:46.829737  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:46.829741  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:46.837657  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:29:46.842637  112472 system_pods.go:59] 17 kube-system pods found
	I0804 01:29:46.842669  112472 system_pods.go:61] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:29:46.842673  112472 system_pods.go:61] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:29:46.842677  112472 system_pods.go:61] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:29:46.842681  112472 system_pods.go:61] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:29:46.842684  112472 system_pods.go:61] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:29:46.842688  112472 system_pods.go:61] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:29:46.842691  112472 system_pods.go:61] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:29:46.842695  112472 system_pods.go:61] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:29:46.842699  112472 system_pods.go:61] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:29:46.842703  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:29:46.842707  112472 system_pods.go:61] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:29:46.842710  112472 system_pods.go:61] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:29:46.842714  112472 system_pods.go:61] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:29:46.842718  112472 system_pods.go:61] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:29:46.842721  112472 system_pods.go:61] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:29:46.842725  112472 system_pods.go:61] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:29:46.842728  112472 system_pods.go:61] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:29:46.842734  112472 system_pods.go:74] duration metric: took 188.48255ms to wait for pod list to return data ...
	I0804 01:29:46.842745  112472 default_sa.go:34] waiting for default service account to be created ...
	I0804 01:29:47.029218  112472 request.go:629] Waited for 186.378146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:29:47.029298  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:29:47.029311  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.029323  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.029333  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.033889  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:29:47.034176  112472 default_sa.go:45] found service account: "default"
	I0804 01:29:47.034201  112472 default_sa.go:55] duration metric: took 191.448723ms for default service account to be created ...
	I0804 01:29:47.034213  112472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 01:29:47.229666  112472 request.go:629] Waited for 195.365938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:47.229731  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:29:47.229737  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.229744  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.229748  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.235971  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:29:47.240311  112472 system_pods.go:86] 17 kube-system pods found
	I0804 01:29:47.240347  112472 system_pods.go:89] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:29:47.240353  112472 system_pods.go:89] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:29:47.240358  112472 system_pods.go:89] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:29:47.240362  112472 system_pods.go:89] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:29:47.240366  112472 system_pods.go:89] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:29:47.240371  112472 system_pods.go:89] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:29:47.240375  112472 system_pods.go:89] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:29:47.240382  112472 system_pods.go:89] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:29:47.240386  112472 system_pods.go:89] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:29:47.240391  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:29:47.240395  112472 system_pods.go:89] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:29:47.240400  112472 system_pods.go:89] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:29:47.240404  112472 system_pods.go:89] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:29:47.240410  112472 system_pods.go:89] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:29:47.240414  112472 system_pods.go:89] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:29:47.240417  112472 system_pods.go:89] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:29:47.240421  112472 system_pods.go:89] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:29:47.240432  112472 system_pods.go:126] duration metric: took 206.208464ms to wait for k8s-apps to be running ...
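
The kube-system pods are listed twice: first to confirm the expected 17 system pods exist at all, then again ("waiting for k8s-apps to be running") to confirm each of them is in the Running phase. The second pass reduces to a list-and-check loop like the following sketch, with the same clientset assumptions as above:

    // allSystemPodsRunning lists kube-system and reports whether every pod is Running.
    func allSystemPodsRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return len(pods.Items) > 0, nil
    }
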
	I0804 01:29:47.240441  112472 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 01:29:47.240489  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:29:47.255788  112472 system_svc.go:56] duration metric: took 15.334437ms WaitForService to wait for kubelet
	I0804 01:29:47.255822  112472 kubeadm.go:582] duration metric: took 23.134258105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:29:47.255849  112472 node_conditions.go:102] verifying NodePressure condition ...
	I0804 01:29:47.429326  112472 request.go:629] Waited for 173.355911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0804 01:29:47.429408  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0804 01:29:47.429419  112472 round_trippers.go:469] Request Headers:
	I0804 01:29:47.429428  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:29:47.429436  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:29:47.432960  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:29:47.433843  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:29:47.433873  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:29:47.433889  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:29:47.433895  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:29:47.433915  112472 node_conditions.go:105] duration metric: took 178.056963ms to run NodePressure ...
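
The NodePressure pass lists every node and records its ephemeral-storage and CPU capacity (here 17734596Ki and 2 CPUs for both nodes) while checking the node pressure conditions. Reading those fields from a Node object looks roughly like this, again reusing the client setup from the earlier sketches:

    // checkNodeConditions prints each node's capacity and fails if a pressure condition is True.
    func checkNodeConditions(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(), n.Status.Capacity.Cpu().String())
            for _, cond := range n.Status.Conditions {
                switch cond.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if cond.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, cond.Type)
                    }
                }
            }
        }
        return nil
    }
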
	I0804 01:29:47.433931  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:29:47.433968  112472 start.go:255] writing updated cluster config ...
	I0804 01:29:47.435993  112472 out.go:177] 
	I0804 01:29:47.437444  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:29:47.437531  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:29:47.439114  112472 out.go:177] * Starting "ha-998889-m03" control-plane node in "ha-998889" cluster
	I0804 01:29:47.440148  112472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:29:47.440173  112472 cache.go:56] Caching tarball of preloaded images
	I0804 01:29:47.440273  112472 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:29:47.440285  112472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:29:47.440381  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:29:47.440559  112472 start.go:360] acquireMachinesLock for ha-998889-m03: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:29:47.440609  112472 start.go:364] duration metric: took 30.779µs to acquireMachinesLock for "ha-998889-m03"
	I0804 01:29:47.440631  112472 start.go:93] Provisioning new machine with config: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:29:47.440776  112472 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0804 01:29:47.442174  112472 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 01:29:47.442338  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:29:47.442388  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:29:47.457540  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I0804 01:29:47.458045  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:29:47.458603  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:29:47.458628  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:29:47.459051  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:29:47.459247  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:29:47.459429  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:29:47.459594  112472 start.go:159] libmachine.API.Create for "ha-998889" (driver="kvm2")
	I0804 01:29:47.459621  112472 client.go:168] LocalClient.Create starting
	I0804 01:29:47.459659  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 01:29:47.459698  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:29:47.459714  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:29:47.459785  112472 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 01:29:47.459811  112472 main.go:141] libmachine: Decoding PEM data...
	I0804 01:29:47.459828  112472 main.go:141] libmachine: Parsing certificate...
	I0804 01:29:47.459852  112472 main.go:141] libmachine: Running pre-create checks...
	I0804 01:29:47.459863  112472 main.go:141] libmachine: (ha-998889-m03) Calling .PreCreateCheck
	I0804 01:29:47.460095  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:29:47.460490  112472 main.go:141] libmachine: Creating machine...
	I0804 01:29:47.460504  112472 main.go:141] libmachine: (ha-998889-m03) Calling .Create
	I0804 01:29:47.460659  112472 main.go:141] libmachine: (ha-998889-m03) Creating KVM machine...
	I0804 01:29:47.461802  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found existing default KVM network
	I0804 01:29:47.462068  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found existing private KVM network mk-ha-998889
	I0804 01:29:47.462227  112472 main.go:141] libmachine: (ha-998889-m03) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 ...
	I0804 01:29:47.462258  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.462152  113280 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:29:47.462275  112472 main.go:141] libmachine: (ha-998889-m03) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 01:29:47.462347  112472 main.go:141] libmachine: (ha-998889-m03) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 01:29:47.712187  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.712061  113280 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa...
	I0804 01:29:47.800440  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.800294  113280 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/ha-998889-m03.rawdisk...
	I0804 01:29:47.800486  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Writing magic tar header
	I0804 01:29:47.800502  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Writing SSH key tar header
	I0804 01:29:47.800513  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:47.800452  113280 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 ...
	I0804 01:29:47.800635  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03
	I0804 01:29:47.800661  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03 (perms=drwx------)
	I0804 01:29:47.800669  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 01:29:47.800679  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 01:29:47.800687  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 01:29:47.800696  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:29:47.800705  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 01:29:47.800717  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 01:29:47.800726  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home/jenkins
	I0804 01:29:47.800737  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Checking permissions on dir: /home
	I0804 01:29:47.800747  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 01:29:47.800762  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Skipping /home - not owner
	I0804 01:29:47.800773  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 01:29:47.800786  112472 main.go:141] libmachine: (ha-998889-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 01:29:47.800797  112472 main.go:141] libmachine: (ha-998889-m03) Creating domain...
	I0804 01:29:47.801883  112472 main.go:141] libmachine: (ha-998889-m03) define libvirt domain using xml: 
	I0804 01:29:47.801914  112472 main.go:141] libmachine: (ha-998889-m03) <domain type='kvm'>
	I0804 01:29:47.801936  112472 main.go:141] libmachine: (ha-998889-m03)   <name>ha-998889-m03</name>
	I0804 01:29:47.801951  112472 main.go:141] libmachine: (ha-998889-m03)   <memory unit='MiB'>2200</memory>
	I0804 01:29:47.801961  112472 main.go:141] libmachine: (ha-998889-m03)   <vcpu>2</vcpu>
	I0804 01:29:47.801970  112472 main.go:141] libmachine: (ha-998889-m03)   <features>
	I0804 01:29:47.801988  112472 main.go:141] libmachine: (ha-998889-m03)     <acpi/>
	I0804 01:29:47.801996  112472 main.go:141] libmachine: (ha-998889-m03)     <apic/>
	I0804 01:29:47.802003  112472 main.go:141] libmachine: (ha-998889-m03)     <pae/>
	I0804 01:29:47.802011  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802017  112472 main.go:141] libmachine: (ha-998889-m03)   </features>
	I0804 01:29:47.802025  112472 main.go:141] libmachine: (ha-998889-m03)   <cpu mode='host-passthrough'>
	I0804 01:29:47.802030  112472 main.go:141] libmachine: (ha-998889-m03)   
	I0804 01:29:47.802035  112472 main.go:141] libmachine: (ha-998889-m03)   </cpu>
	I0804 01:29:47.802043  112472 main.go:141] libmachine: (ha-998889-m03)   <os>
	I0804 01:29:47.802049  112472 main.go:141] libmachine: (ha-998889-m03)     <type>hvm</type>
	I0804 01:29:47.802084  112472 main.go:141] libmachine: (ha-998889-m03)     <boot dev='cdrom'/>
	I0804 01:29:47.802116  112472 main.go:141] libmachine: (ha-998889-m03)     <boot dev='hd'/>
	I0804 01:29:47.802127  112472 main.go:141] libmachine: (ha-998889-m03)     <bootmenu enable='no'/>
	I0804 01:29:47.802138  112472 main.go:141] libmachine: (ha-998889-m03)   </os>
	I0804 01:29:47.802146  112472 main.go:141] libmachine: (ha-998889-m03)   <devices>
	I0804 01:29:47.802152  112472 main.go:141] libmachine: (ha-998889-m03)     <disk type='file' device='cdrom'>
	I0804 01:29:47.802163  112472 main.go:141] libmachine: (ha-998889-m03)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/boot2docker.iso'/>
	I0804 01:29:47.802170  112472 main.go:141] libmachine: (ha-998889-m03)       <target dev='hdc' bus='scsi'/>
	I0804 01:29:47.802176  112472 main.go:141] libmachine: (ha-998889-m03)       <readonly/>
	I0804 01:29:47.802182  112472 main.go:141] libmachine: (ha-998889-m03)     </disk>
	I0804 01:29:47.802189  112472 main.go:141] libmachine: (ha-998889-m03)     <disk type='file' device='disk'>
	I0804 01:29:47.802197  112472 main.go:141] libmachine: (ha-998889-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 01:29:47.802205  112472 main.go:141] libmachine: (ha-998889-m03)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/ha-998889-m03.rawdisk'/>
	I0804 01:29:47.802215  112472 main.go:141] libmachine: (ha-998889-m03)       <target dev='hda' bus='virtio'/>
	I0804 01:29:47.802222  112472 main.go:141] libmachine: (ha-998889-m03)     </disk>
	I0804 01:29:47.802241  112472 main.go:141] libmachine: (ha-998889-m03)     <interface type='network'>
	I0804 01:29:47.802253  112472 main.go:141] libmachine: (ha-998889-m03)       <source network='mk-ha-998889'/>
	I0804 01:29:47.802264  112472 main.go:141] libmachine: (ha-998889-m03)       <model type='virtio'/>
	I0804 01:29:47.802272  112472 main.go:141] libmachine: (ha-998889-m03)     </interface>
	I0804 01:29:47.802282  112472 main.go:141] libmachine: (ha-998889-m03)     <interface type='network'>
	I0804 01:29:47.802291  112472 main.go:141] libmachine: (ha-998889-m03)       <source network='default'/>
	I0804 01:29:47.802302  112472 main.go:141] libmachine: (ha-998889-m03)       <model type='virtio'/>
	I0804 01:29:47.802313  112472 main.go:141] libmachine: (ha-998889-m03)     </interface>
	I0804 01:29:47.802324  112472 main.go:141] libmachine: (ha-998889-m03)     <serial type='pty'>
	I0804 01:29:47.802332  112472 main.go:141] libmachine: (ha-998889-m03)       <target port='0'/>
	I0804 01:29:47.802342  112472 main.go:141] libmachine: (ha-998889-m03)     </serial>
	I0804 01:29:47.802350  112472 main.go:141] libmachine: (ha-998889-m03)     <console type='pty'>
	I0804 01:29:47.802361  112472 main.go:141] libmachine: (ha-998889-m03)       <target type='serial' port='0'/>
	I0804 01:29:47.802369  112472 main.go:141] libmachine: (ha-998889-m03)     </console>
	I0804 01:29:47.802378  112472 main.go:141] libmachine: (ha-998889-m03)     <rng model='virtio'>
	I0804 01:29:47.802388  112472 main.go:141] libmachine: (ha-998889-m03)       <backend model='random'>/dev/random</backend>
	I0804 01:29:47.802398  112472 main.go:141] libmachine: (ha-998889-m03)     </rng>
	I0804 01:29:47.802406  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802415  112472 main.go:141] libmachine: (ha-998889-m03)     
	I0804 01:29:47.802437  112472 main.go:141] libmachine: (ha-998889-m03)   </devices>
	I0804 01:29:47.802456  112472 main.go:141] libmachine: (ha-998889-m03) </domain>
	I0804 01:29:47.802467  112472 main.go:141] libmachine: (ha-998889-m03) 
	I0804 01:29:47.809409  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:2f:96:e2 in network default
	I0804 01:29:47.809984  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:47.810009  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring networks are active...
	I0804 01:29:47.810807  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring network default is active
	I0804 01:29:47.811254  112472 main.go:141] libmachine: (ha-998889-m03) Ensuring network mk-ha-998889 is active
	I0804 01:29:47.811705  112472 main.go:141] libmachine: (ha-998889-m03) Getting domain xml...
	I0804 01:29:47.812654  112472 main.go:141] libmachine: (ha-998889-m03) Creating domain...
	I0804 01:29:49.074803  112472 main.go:141] libmachine: (ha-998889-m03) Waiting to get IP...
	I0804 01:29:49.075504  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.075918  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.075968  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.075908  113280 retry.go:31] will retry after 189.457657ms: waiting for machine to come up
	I0804 01:29:49.267413  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.268028  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.268065  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.267992  113280 retry.go:31] will retry after 365.715137ms: waiting for machine to come up
	I0804 01:29:49.635599  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.636060  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.636084  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.636007  113280 retry.go:31] will retry after 320.225156ms: waiting for machine to come up
	I0804 01:29:49.957564  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:49.958013  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:49.958080  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:49.957983  113280 retry.go:31] will retry after 606.874403ms: waiting for machine to come up
	I0804 01:29:50.566914  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:50.567429  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:50.567459  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:50.567385  113280 retry.go:31] will retry after 709.427152ms: waiting for machine to come up
	I0804 01:29:51.278500  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:51.278940  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:51.279012  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:51.278925  113280 retry.go:31] will retry after 739.069612ms: waiting for machine to come up
	I0804 01:29:52.019405  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:52.020063  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:52.020098  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:52.019997  113280 retry.go:31] will retry after 746.991915ms: waiting for machine to come up
	I0804 01:29:52.768394  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:52.768717  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:52.768746  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:52.768665  113280 retry.go:31] will retry after 1.374146128s: waiting for machine to come up
	I0804 01:29:54.145379  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:54.145892  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:54.145916  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:54.145852  113280 retry.go:31] will retry after 1.561798019s: waiting for machine to come up
	I0804 01:29:55.709100  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:55.709511  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:55.709544  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:55.709458  113280 retry.go:31] will retry after 2.192385477s: waiting for machine to come up
	I0804 01:29:57.903276  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:57.903806  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:57.903838  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:57.903750  113280 retry.go:31] will retry after 1.945348735s: waiting for machine to come up
	I0804 01:29:59.851064  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:29:59.851484  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:29:59.851510  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:29:59.851452  113280 retry.go:31] will retry after 2.313076479s: waiting for machine to come up
	I0804 01:30:02.166675  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:02.167233  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:30:02.167259  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:30:02.167193  113280 retry.go:31] will retry after 3.956837801s: waiting for machine to come up
	I0804 01:30:06.128554  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:06.128904  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find current IP address of domain ha-998889-m03 in network mk-ha-998889
	I0804 01:30:06.128930  112472 main.go:141] libmachine: (ha-998889-m03) DBG | I0804 01:30:06.128865  113280 retry.go:31] will retry after 3.689366809s: waiting for machine to come up
	I0804 01:30:09.820728  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.821213  112472 main.go:141] libmachine: (ha-998889-m03) Found IP for machine: 192.168.39.148
	I0804 01:30:09.821245  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has current primary IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.821259  112472 main.go:141] libmachine: (ha-998889-m03) Reserving static IP address...
	I0804 01:30:09.821655  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find host DHCP lease matching {name: "ha-998889-m03", mac: "52:54:00:65:ff:5a", ip: "192.168.39.148"} in network mk-ha-998889
	I0804 01:30:09.897493  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Getting to WaitForSSH function...
	I0804 01:30:09.897526  112472 main.go:141] libmachine: (ha-998889-m03) Reserved static IP address: 192.168.39.148
	I0804 01:30:09.897541  112472 main.go:141] libmachine: (ha-998889-m03) Waiting for SSH to be available...
	I0804 01:30:09.900192  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:09.900585  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889
	I0804 01:30:09.900610  112472 main.go:141] libmachine: (ha-998889-m03) DBG | unable to find defined IP address of network mk-ha-998889 interface with MAC address 52:54:00:65:ff:5a
	I0804 01:30:09.900815  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH client type: external
	I0804 01:30:09.900845  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa (-rw-------)
	I0804 01:30:09.900873  112472 main.go:141] libmachine: (ha-998889-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:30:09.900893  112472 main.go:141] libmachine: (ha-998889-m03) DBG | About to run SSH command:
	I0804 01:30:09.900906  112472 main.go:141] libmachine: (ha-998889-m03) DBG | exit 0
	I0804 01:30:09.904610  112472 main.go:141] libmachine: (ha-998889-m03) DBG | SSH cmd err, output: exit status 255: 
	I0804 01:30:09.904636  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0804 01:30:09.904647  112472 main.go:141] libmachine: (ha-998889-m03) DBG | command : exit 0
	I0804 01:30:09.904658  112472 main.go:141] libmachine: (ha-998889-m03) DBG | err     : exit status 255
	I0804 01:30:09.904677  112472 main.go:141] libmachine: (ha-998889-m03) DBG | output  : 
	I0804 01:30:12.905342  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Getting to WaitForSSH function...
	I0804 01:30:12.907647  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:12.907990  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:12.908008  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:12.908131  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH client type: external
	I0804 01:30:12.908148  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa (-rw-------)
	I0804 01:30:12.908176  112472 main.go:141] libmachine: (ha-998889-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 01:30:12.908195  112472 main.go:141] libmachine: (ha-998889-m03) DBG | About to run SSH command:
	I0804 01:30:12.908209  112472 main.go:141] libmachine: (ha-998889-m03) DBG | exit 0
	I0804 01:30:13.037574  112472 main.go:141] libmachine: (ha-998889-m03) DBG | SSH cmd err, output: <nil>: 
	I0804 01:30:13.037860  112472 main.go:141] libmachine: (ha-998889-m03) KVM machine creation complete!
	I0804 01:30:13.038261  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:30:13.038837  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:13.039026  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:13.039157  112472 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 01:30:13.039173  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:30:13.041139  112472 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 01:30:13.041158  112472 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 01:30:13.041167  112472 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 01:30:13.041173  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.043399  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.043769  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.043798  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.043969  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.044180  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.044362  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.044571  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.044785  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.045039  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.045051  112472 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 01:30:13.161179  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:30:13.161204  112472 main.go:141] libmachine: Detecting the provisioner...
	I0804 01:30:13.161212  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.164284  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.164748  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.164781  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.164997  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.165223  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.165409  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.165554  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.165743  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.165930  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.165943  112472 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 01:30:13.282476  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 01:30:13.282563  112472 main.go:141] libmachine: found compatible host: buildroot
	I0804 01:30:13.282577  112472 main.go:141] libmachine: Provisioning with buildroot...
	I0804 01:30:13.282591  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.282885  112472 buildroot.go:166] provisioning hostname "ha-998889-m03"
	I0804 01:30:13.282913  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.283161  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.286094  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.286506  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.286527  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.286720  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.286918  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.287105  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.287259  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.287465  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.287685  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.287698  112472 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889-m03 && echo "ha-998889-m03" | sudo tee /etc/hostname
	I0804 01:30:13.420354  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889-m03
	
	I0804 01:30:13.420386  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.422993  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.423428  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.423458  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.423605  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.423805  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.424021  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.424184  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.424342  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.424516  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.424536  112472 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:30:13.551466  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:30:13.551509  112472 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:30:13.551533  112472 buildroot.go:174] setting up certificates
	I0804 01:30:13.551547  112472 provision.go:84] configureAuth start
	I0804 01:30:13.551561  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetMachineName
	I0804 01:30:13.551907  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:13.554723  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.555212  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.555243  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.555328  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.557675  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.558008  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.558035  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.558148  112472 provision.go:143] copyHostCerts
	I0804 01:30:13.558196  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:30:13.558251  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:30:13.558263  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:30:13.558364  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:30:13.558476  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:30:13.558516  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:30:13.558528  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:30:13.558586  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:30:13.558661  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:30:13.558683  112472 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:30:13.558691  112472 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:30:13.558717  112472 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:30:13.558784  112472 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889-m03 san=[127.0.0.1 192.168.39.148 ha-998889-m03 localhost minikube]
	I0804 01:30:13.664412  112472 provision.go:177] copyRemoteCerts
	I0804 01:30:13.664474  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:30:13.664499  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.667368  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.667684  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.667720  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.667868  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.668059  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.668204  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.668368  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:13.761411  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:30:13.761490  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:30:13.793581  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:30:13.793658  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0804 01:30:13.822382  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:30:13.822468  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:30:13.848437  112472 provision.go:87] duration metric: took 296.872735ms to configureAuth
	I0804 01:30:13.848468  112472 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:30:13.848804  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:13.848905  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:13.852406  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.852767  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:13.852846  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:13.852975  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:13.853168  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.853332  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:13.853493  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:13.853655  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:13.853815  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:13.853829  112472 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:30:14.128268  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:30:14.128296  112472 main.go:141] libmachine: Checking connection to Docker...
	I0804 01:30:14.128305  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetURL
	I0804 01:30:14.129674  112472 main.go:141] libmachine: (ha-998889-m03) DBG | Using libvirt version 6000000
	I0804 01:30:14.132270  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.132741  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.132783  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.132998  112472 main.go:141] libmachine: Docker is up and running!
	I0804 01:30:14.133017  112472 main.go:141] libmachine: Reticulating splines...
	I0804 01:30:14.133027  112472 client.go:171] duration metric: took 26.673394167s to LocalClient.Create
	I0804 01:30:14.133074  112472 start.go:167] duration metric: took 26.67346353s to libmachine.API.Create "ha-998889"
	I0804 01:30:14.133088  112472 start.go:293] postStartSetup for "ha-998889-m03" (driver="kvm2")
	I0804 01:30:14.133121  112472 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:30:14.133150  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.133443  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:30:14.133476  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.135882  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.136213  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.136249  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.136431  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.136623  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.136756  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.136933  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.224635  112472 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:30:14.229334  112472 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:30:14.229381  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:30:14.229455  112472 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:30:14.229530  112472 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:30:14.229541  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:30:14.229636  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:30:14.239822  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:30:14.268485  112472 start.go:296] duration metric: took 135.379938ms for postStartSetup
	I0804 01:30:14.268543  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetConfigRaw
	I0804 01:30:14.269200  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:14.271918  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.272262  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.272292  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.272695  112472 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:30:14.272949  112472 start.go:128] duration metric: took 26.832159097s to createHost
	I0804 01:30:14.272979  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.275655  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.276002  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.276026  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.276211  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.276420  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.276595  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.276777  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.276968  112472 main.go:141] libmachine: Using SSH client type: native
	I0804 01:30:14.277160  112472 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0804 01:30:14.277174  112472 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:30:14.394389  112472 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722735014.374072982
	
	I0804 01:30:14.394416  112472 fix.go:216] guest clock: 1722735014.374072982
	I0804 01:30:14.394426  112472 fix.go:229] Guest: 2024-08-04 01:30:14.374072982 +0000 UTC Remote: 2024-08-04 01:30:14.272965577 +0000 UTC m=+160.274827793 (delta=101.107405ms)
	I0804 01:30:14.394448  112472 fix.go:200] guest clock delta is within tolerance: 101.107405ms
	I0804 01:30:14.394455  112472 start.go:83] releasing machines lock for "ha-998889-m03", held for 26.953834041s
	I0804 01:30:14.394480  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.394787  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:14.397280  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.397679  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.397707  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.399948  112472 out.go:177] * Found network options:
	I0804 01:30:14.401274  112472 out.go:177]   - NO_PROXY=192.168.39.12,192.168.39.200
	W0804 01:30:14.402466  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 01:30:14.402488  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:30:14.402506  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403106  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403327  112472 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:30:14.403436  112472 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:30:14.403482  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	W0804 01:30:14.403591  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	W0804 01:30:14.403610  112472 proxy.go:119] fail to check proxy env: Error ip not in block
	I0804 01:30:14.403668  112472 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:30:14.403686  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:30:14.406583  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.406912  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407309  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.407336  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407343  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.407462  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:14.407483  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:14.407535  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.407638  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:30:14.407751  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.407872  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:30:14.407967  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.408034  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:30:14.408175  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:30:14.661633  112472 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:30:14.668852  112472 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:30:14.668933  112472 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:30:14.686275  112472 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 01:30:14.686304  112472 start.go:495] detecting cgroup driver to use...
	I0804 01:30:14.686386  112472 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:30:14.707419  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:30:14.721366  112472 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:30:14.721433  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:30:14.736634  112472 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:30:14.752510  112472 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:30:14.871429  112472 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:30:15.053551  112472 docker.go:233] disabling docker service ...
	I0804 01:30:15.053634  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:30:15.068636  112472 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:30:15.082000  112472 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:30:15.199277  112472 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:30:15.319789  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:30:15.335346  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:30:15.356824  112472 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:30:15.356888  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.370341  112472 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:30:15.370413  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.385555  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.396720  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.408113  112472 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:30:15.419473  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.430763  112472 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:30:15.450864  112472 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
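For reference, the net effect of the tee/sed commands above on the node's CRI configuration should look roughly like the sketch below. This is reconstructed from the commands in the log rather than captured from the VM, so treat the exact layout as an approximation.

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched by the edits)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Setting net.ipv4.ip_unprivileged_port_start=0 lets containers bind ports below 1024 without extra privileges.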
	I0804 01:30:15.462623  112472 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:30:15.472861  112472 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 01:30:15.472956  112472 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 01:30:15.486904  112472 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
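The failed sysctl above is expected on a fresh VM: the /proc/sys/net/bridge/ tree only exists once the br_netfilter module is loaded, which is why the next steps load the module and enable IPv4 forwarding. Done by hand, the sequence would be roughly:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves now instead of failing
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

Both settings are standard prerequisites for bridge-based pod networking: iptables must see bridged traffic and the node must forward packets.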
	I0804 01:30:15.496524  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:15.619668  112472 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:30:15.764119  112472 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:30:15.764213  112472 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:30:15.769427  112472 start.go:563] Will wait 60s for crictl version
	I0804 01:30:15.769500  112472 ssh_runner.go:195] Run: which crictl
	I0804 01:30:15.773524  112472 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:30:15.810930  112472 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:30:15.811011  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:30:15.840357  112472 ssh_runner.go:195] Run: crio --version
	I0804 01:30:15.871423  112472 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:30:15.872754  112472 out.go:177]   - env NO_PROXY=192.168.39.12
	I0804 01:30:15.874141  112472 out.go:177]   - env NO_PROXY=192.168.39.12,192.168.39.200
	I0804 01:30:15.875479  112472 main.go:141] libmachine: (ha-998889-m03) Calling .GetIP
	I0804 01:30:15.878552  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:15.879139  112472 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:30:15.879165  112472 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:30:15.879390  112472 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:30:15.883860  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:30:15.896251  112472 mustload.go:65] Loading cluster: ha-998889
	I0804 01:30:15.896487  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:15.896754  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:15.896802  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:15.912025  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38809
	I0804 01:30:15.912523  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:15.913190  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:15.913213  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:15.913546  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:15.913770  112472 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:30:15.915381  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:30:15.915679  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:15.915722  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:15.930291  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39911
	I0804 01:30:15.930709  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:15.931148  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:15.931169  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:15.931534  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:15.931749  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:30:15.931981  112472 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.148
	I0804 01:30:15.931994  112472 certs.go:194] generating shared ca certs ...
	I0804 01:30:15.932028  112472 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:15.932178  112472 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:30:15.932241  112472 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:30:15.932256  112472 certs.go:256] generating profile certs ...
	I0804 01:30:15.932358  112472 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:30:15.932391  112472 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d
	I0804 01:30:15.932413  112472 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.148 192.168.39.254]
	I0804 01:30:16.080096  112472 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d ...
	I0804 01:30:16.080131  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d: {Name:mkc85edb2ed057b5fb989579a363ce447c718130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:16.080309  112472 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d ...
	I0804 01:30:16.080321  112472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d: {Name:mkc7544167880e60634768ff5b37bb0473e49d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:30:16.080388  112472 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.cc28b01d -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:30:16.080524  112472 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.cc28b01d -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
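The apiserver certificate regenerated here has to cover every address a client might use to reach this control plane, which is why the IP list above includes the in-cluster service IPs (10.96.0.1, 10.0.0.1), localhost, all three node IPs, and the kube-vip VIP 192.168.39.254. After the certs are pushed to the node, the SAN list could be double-checked with something like:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A2 'Subject Alternative Name'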
	I0804 01:30:16.080682  112472 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:30:16.080699  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:30:16.080712  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:30:16.080725  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:30:16.080738  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:30:16.080753  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:30:16.080766  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:30:16.080778  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:30:16.080793  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:30:16.080853  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:30:16.080895  112472 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:30:16.080908  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:30:16.080937  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:30:16.080968  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:30:16.081005  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:30:16.081066  112472 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:30:16.081099  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.081113  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.081126  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.081163  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:30:16.084207  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:16.084548  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:30:16.084579  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:16.084763  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:30:16.085030  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:30:16.085183  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:30:16.085343  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:30:16.161830  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0804 01:30:16.168321  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0804 01:30:16.181751  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0804 01:30:16.186828  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0804 01:30:16.197820  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0804 01:30:16.202400  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0804 01:30:16.214244  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0804 01:30:16.218690  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0804 01:30:16.229869  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0804 01:30:16.234497  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0804 01:30:16.246798  112472 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0804 01:30:16.251327  112472 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0804 01:30:16.263416  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:30:16.291661  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:30:16.318484  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:30:16.346074  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:30:16.373940  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0804 01:30:16.398867  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 01:30:16.426843  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:30:16.454738  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:30:16.481058  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:30:16.505569  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:30:16.530499  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:30:16.556438  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0804 01:30:16.574307  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0804 01:30:16.593420  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0804 01:30:16.611302  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0804 01:30:16.631728  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0804 01:30:16.650414  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0804 01:30:16.671020  112472 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0804 01:30:16.689275  112472 ssh_runner.go:195] Run: openssl version
	I0804 01:30:16.695184  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:30:16.706723  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.711610  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.711674  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:30:16.717526  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:30:16.728558  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:30:16.739903  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.744796  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.744862  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:30:16.750729  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:30:16.763427  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:30:16.776126  112472 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.781382  112472 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.781459  112472 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:30:16.787358  112472 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
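These test -L / ln -fs pairs install each CA the way OpenSSL's trust store expects: the PEM lives under /usr/share/ca-certificates and a symlink named after the certificate's subject hash points at it from /etc/ssl/certs (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the two local certs). The hash in each link name is exactly what the preceding openssl x509 -hash -noout call printed, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, so the trust-store entry is /etc/ssl/certs/b5213941.0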
	I0804 01:30:16.801441  112472 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:30:16.806107  112472 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 01:30:16.806180  112472 kubeadm.go:934] updating node {m03 192.168.39.148 8443 v1.30.3 crio true true} ...
	I0804 01:30:16.806283  112472 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
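The unit snippet above appears to be what is written a moment later as the kubelet systemd drop-in (the 313-byte 10-kubeadm.conf copied below): ExecStart is cleared and re-set so that --hostname-override=ha-998889-m03 and --node-ip=192.168.39.148 pin this kubelet to the third control-plane member's identity. Once the node is running, the effective unit can be inspected with:

    systemctl cat kubelet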
	I0804 01:30:16.806319  112472 kube-vip.go:115] generating kube-vip config ...
	I0804 01:30:16.806365  112472 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:30:16.825844  112472 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:30:16.825921  112472 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
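This static pod is what backs the 192.168.39.254 control-plane VIP: every control-plane node runs kube-vip with NET_ADMIN/NET_RAW, the instances leader-elect through the plndr-cp-lock lease in kube-system (5s lease, 3s renew deadline, 1s retry), and the elected leader advertises the VIP on eth0 via ARP while load-balancing API traffic on port 8443 (lb_enable/lb_port). With a working kubeconfig, the current VIP holder can be checked with something like:

    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'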
	I0804 01:30:16.826004  112472 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:30:16.836795  112472 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0804 01:30:16.836887  112472 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0804 01:30:16.847722  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0804 01:30:16.847754  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0804 01:30:16.847775  112472 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0804 01:30:16.847782  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:30:16.847786  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:30:16.847792  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:30:16.847871  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0804 01:30:16.847873  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0804 01:30:16.866525  112472 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:30:16.866681  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0804 01:30:16.866720  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0804 01:30:16.866750  112472 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0804 01:30:16.866635  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0804 01:30:16.866789  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0804 01:30:16.899709  112472 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0804 01:30:16.899754  112472 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
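The three v1.30.3 binaries are fetched with their published per-file checksums (the ?checksum=file:...sha256 suffix on each URL) and then copied onto the node under /var/lib/minikube/binaries/v1.30.3. Verifying one of them by hand follows the usual upstream procedure, roughly:

    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check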
	I0804 01:30:17.867269  112472 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0804 01:30:17.878945  112472 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0804 01:30:17.898032  112472 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:30:17.916986  112472 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:30:17.936555  112472 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:30:17.941044  112472 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 01:30:17.955915  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:18.092240  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:30:18.110849  112472 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:30:18.111231  112472 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:30:18.111280  112472 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:30:18.126563  112472 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0804 01:30:18.127163  112472 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:30:18.127798  112472 main.go:141] libmachine: Using API Version  1
	I0804 01:30:18.127825  112472 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:30:18.128255  112472 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:30:18.128471  112472 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:30:18.128674  112472 start.go:317] joinCluster: &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:30:18.128823  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0804 01:30:18.128844  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:30:18.132258  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:18.132695  112472 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:30:18.132732  112472 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:30:18.132913  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:30:18.133115  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:30:18.133281  112472 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:30:18.133447  112472 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:30:18.386102  112472 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:30:18.386168  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token woe9t9.vi1uxuwpaas0hcwg --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0804 01:30:41.003939  112472 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token woe9t9.vi1uxuwpaas0hcwg --discovery-token-ca-cert-hash sha256:ad6f90d7a52d8833895810de940d7d5020c800e5f52977d9ebe295f6ef73767e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-998889-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (22.617738978s)
	I0804 01:30:41.003983  112472 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0804 01:30:41.698461  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-998889-m03 minikube.k8s.io/updated_at=2024_08_04T01_30_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6 minikube.k8s.io/name=ha-998889 minikube.k8s.io/primary=false
	I0804 01:30:41.843170  112472 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-998889-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0804 01:30:41.972331  112472 start.go:319] duration metric: took 23.843650014s to joinCluster
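The join itself mirrors what one would do by hand for an additional control-plane member: mint a join token on an existing control-plane node, run kubeadm join with --control-plane on the new machine, then label the node and clear the control-plane NoSchedule taint so it can also schedule workloads. Stripped of the minikube-specific flags and paths, the manual equivalent is roughly:

    # on an existing control-plane node
    kubeadm token create --print-join-command --ttl=0
    # on the joining node, using the token/hash printed above
    kubeadm join control-plane.minikube.internal:8443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443
    # back on the cluster
    kubectl label node ha-998889-m03 minikube.k8s.io/primary=false --overwrite
    kubectl taint node ha-998889-m03 node-role.kubernetes.io/control-plane:NoSchedule-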
	I0804 01:30:41.972451  112472 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 01:30:41.972822  112472 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:30:41.973997  112472 out.go:177] * Verifying Kubernetes components...
	I0804 01:30:41.975277  112472 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:30:42.275005  112472 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:30:42.307156  112472 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:30:42.307609  112472 kapi.go:59] client config for ha-998889: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0804 01:30:42.307713  112472 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.12:8443
	I0804 01:30:42.308051  112472 node_ready.go:35] waiting up to 6m0s for node "ha-998889-m03" to be "Ready" ...
	I0804 01:30:42.308170  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:42.308185  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:42.308196  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:42.308204  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:42.311410  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:42.808869  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:42.808900  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:42.808918  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:42.808923  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:42.812787  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:43.308996  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:43.309046  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:43.309060  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:43.309065  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:43.312826  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:43.808495  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:43.808522  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:43.808532  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:43.808538  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:43.812765  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:44.308478  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:44.308509  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:44.308519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:44.308524  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:44.313466  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:44.314164  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:44.808368  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:44.808393  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:44.808404  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:44.808410  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:44.811824  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:45.308699  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:45.308720  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:45.308730  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:45.308738  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:45.312130  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:45.808965  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:45.808988  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:45.808996  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:45.809000  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:45.812789  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.308583  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:46.308613  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:46.308626  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:46.308634  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:46.312136  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.809377  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:46.809413  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:46.809426  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:46.809430  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:46.812751  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:46.813645  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:47.309143  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:47.309182  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:47.309193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:47.309198  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:47.314050  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:47.808301  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:47.808328  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:47.808338  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:47.808342  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:47.812309  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:48.308363  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:48.308390  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:48.308400  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:48.308406  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:48.312924  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:48.809066  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:48.809099  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:48.809109  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:48.809114  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:48.812530  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:49.308430  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:49.308453  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:49.308462  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:49.308468  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:49.312205  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:49.313218  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:49.808688  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:49.808716  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:49.808724  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:49.808729  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:49.812289  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:50.309123  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:50.309150  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:50.309164  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:50.309168  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:50.312828  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:50.809047  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:50.809074  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:50.809085  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:50.809091  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:50.812368  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:51.309245  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:51.309274  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:51.309285  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:51.309291  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:51.313490  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:51.314034  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:51.808304  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:51.808329  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:51.808348  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:51.808352  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:51.811637  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:52.309113  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:52.309140  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:52.309147  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:52.309151  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:52.312552  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:52.808933  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:52.808958  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:52.808966  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:52.808972  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:52.813010  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:53.308307  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:53.308333  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:53.308342  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:53.308347  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:53.312252  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:53.808884  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:53.808908  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:53.808917  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:53.808921  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:53.812577  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:53.815786  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:54.308578  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:54.308603  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:54.308611  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:54.308616  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:54.311890  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:54.808843  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:54.808874  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:54.808886  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:54.808892  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:54.812280  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:55.308795  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:55.308821  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:55.308833  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:55.308840  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:55.312214  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:55.809063  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:55.809088  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:55.809098  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:55.809102  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:55.813432  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:30:56.308394  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:56.308419  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:56.308428  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:56.308431  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:56.311872  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:56.312591  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:56.808943  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:56.808967  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:56.808976  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:56.808980  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:56.812629  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:57.308638  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:57.308662  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:57.308674  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:57.308680  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:57.312518  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:57.809285  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:57.809310  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:57.809318  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:57.809322  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:57.812874  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:58.309200  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:58.309224  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:58.309233  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:58.309236  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:58.313089  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:58.313678  112472 node_ready.go:53] node "ha-998889-m03" has status "Ready":"False"
	I0804 01:30:58.809108  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:58.809132  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:58.809141  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:58.809146  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:58.813028  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.309031  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:59.309056  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.309065  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.309068  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.312234  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.808448  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:30:59.808472  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.808483  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.808488  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.826718  112472 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0804 01:30:59.828110  112472 node_ready.go:49] node "ha-998889-m03" has status "Ready":"True"
	I0804 01:30:59.828143  112472 node_ready.go:38] duration metric: took 17.520049448s for node "ha-998889-m03" to be "Ready" ...
	I0804 01:30:59.828156  112472 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:30:59.828245  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:30:59.828259  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.828270  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.828275  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.838580  112472 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0804 01:30:59.845272  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.845380  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-b8ds7
	I0804 01:30:59.845391  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.845401  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.845407  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.849095  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.849914  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.849928  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.849936  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.849941  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.852403  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.852929  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.852945  112472 pod_ready.go:81] duration metric: took 7.648649ms for pod "coredns-7db6d8ff4d-b8ds7" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.852954  112472 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.853003  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ddb5m
	I0804 01:30:59.853010  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.853017  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.853020  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.855735  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.856353  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.856367  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.856376  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.856383  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.862863  112472 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0804 01:30:59.863413  112472 pod_ready.go:92] pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.863439  112472 pod_ready.go:81] duration metric: took 10.477352ms for pod "coredns-7db6d8ff4d-ddb5m" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.863452  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.863522  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889
	I0804 01:30:59.863532  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.863543  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.863548  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.865872  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.866493  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:30:59.866511  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.866519  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.866522  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.868836  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.869558  112472 pod_ready.go:92] pod "etcd-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.869582  112472 pod_ready.go:81] duration metric: took 6.121811ms for pod "etcd-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.869594  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.869702  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m02
	I0804 01:30:59.869716  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.869726  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.869733  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.872935  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:30:59.873789  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:30:59.873803  112472 round_trippers.go:469] Request Headers:
	I0804 01:30:59.873810  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:30:59.873814  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:30:59.876184  112472 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0804 01:30:59.876681  112472 pod_ready.go:92] pod "etcd-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:30:59.876700  112472 pod_ready.go:81] duration metric: took 7.098495ms for pod "etcd-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:30:59.876711  112472 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.009081  112472 request.go:629] Waited for 132.282502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m03
	I0804 01:31:00.009145  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/etcd-ha-998889-m03
	I0804 01:31:00.009152  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.009160  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.009164  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.012991  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.209108  112472 request.go:629] Waited for 195.384298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:00.209180  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:00.209185  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.209193  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.209199  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.212249  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.213049  112472 pod_ready.go:92] pod "etcd-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:00.213072  112472 pod_ready.go:81] duration metric: took 336.352876ms for pod "etcd-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.213095  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.409304  112472 request.go:629] Waited for 196.122455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:31:00.409438  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889
	I0804 01:31:00.409453  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.409464  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.409472  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.413050  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.608903  112472 request.go:629] Waited for 194.997248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:00.608983  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:00.608991  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.608999  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.609006  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.612483  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:00.613128  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:00.613158  112472 pod_ready.go:81] duration metric: took 400.051229ms for pod "kube-apiserver-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.613171  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:00.809394  112472 request.go:629] Waited for 196.092914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:31:00.809483  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m02
	I0804 01:31:00.809494  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:00.809502  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:00.809510  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:00.813330  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.008719  112472 request.go:629] Waited for 194.195442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:01.008812  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:01.008818  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.008826  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.008832  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.012244  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.013108  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.013127  112472 pod_ready.go:81] duration metric: took 399.947721ms for pod "kube-apiserver-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.013137  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.209257  112472 request.go:629] Waited for 196.041527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m03
	I0804 01:31:01.209339  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889-m03
	I0804 01:31:01.209347  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.209376  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.209387  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.212936  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.409296  112472 request.go:629] Waited for 195.427061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:01.409386  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:01.409393  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.409403  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.409409  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.412961  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.413564  112472 pod_ready.go:92] pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.413585  112472 pod_ready.go:81] duration metric: took 400.440867ms for pod "kube-apiserver-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.413600  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.608483  112472 request.go:629] Waited for 194.807036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:31:01.608576  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889
	I0804 01:31:01.608588  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.608599  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.608608  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.612025  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.809427  112472 request.go:629] Waited for 196.415288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:01.809528  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:01.809540  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:01.809552  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:01.809563  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:01.813110  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:01.813836  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:01.813858  112472 pod_ready.go:81] duration metric: took 400.250706ms for pod "kube-controller-manager-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:01.813868  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.008956  112472 request.go:629] Waited for 195.007111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:31:02.009023  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m02
	I0804 01:31:02.009032  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.009043  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.009053  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.013144  112472 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0804 01:31:02.209424  112472 request.go:629] Waited for 195.382799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:02.209482  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:02.209487  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.209500  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.209506  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.213058  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.213777  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:02.213798  112472 pod_ready.go:81] duration metric: took 399.923508ms for pod "kube-controller-manager-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.213807  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.408974  112472 request.go:629] Waited for 195.100368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m03
	I0804 01:31:02.409073  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889-m03
	I0804 01:31:02.409081  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.409089  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.409093  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.412322  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.609305  112472 request.go:629] Waited for 196.268064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:02.609402  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:02.609411  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.609423  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.609432  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.612667  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:02.613449  112472 pod_ready.go:92] pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:02.613477  112472 pod_ready.go:81] duration metric: took 399.661848ms for pod "kube-controller-manager-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.613490  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:02.809542  112472 request.go:629] Waited for 195.946316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:31:02.809628  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-56twz
	I0804 01:31:02.809640  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:02.809650  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:02.809660  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:02.813159  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.009497  112472 request.go:629] Waited for 195.334978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:03.009573  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:03.009580  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.009591  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.009616  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.013257  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.013965  112472 pod_ready.go:92] pod "kube-proxy-56twz" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.013990  112472 pod_ready.go:81] duration metric: took 400.490961ms for pod "kube-proxy-56twz" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.014001  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.208538  112472 request.go:629] Waited for 194.457271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:31:03.208640  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v4j77
	I0804 01:31:03.208653  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.208664  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.208674  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.212345  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.408592  112472 request.go:629] Waited for 195.291669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:03.408692  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:03.408703  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.408711  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.408716  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.412265  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.412890  112472 pod_ready.go:92] pod "kube-proxy-v4j77" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.412913  112472 pod_ready.go:81] duration metric: took 398.906611ms for pod "kube-proxy-v4j77" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.412922  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wj5z9" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.609092  112472 request.go:629] Waited for 196.107713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj5z9
	I0804 01:31:03.609176  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wj5z9
	I0804 01:31:03.609186  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.609194  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.609199  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.613145  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.809439  112472 request.go:629] Waited for 195.396824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:03.809543  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:03.809555  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:03.809569  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:03.809577  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:03.813455  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:03.814254  112472 pod_ready.go:92] pod "kube-proxy-wj5z9" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:03.814279  112472 pod_ready.go:81] duration metric: took 401.349853ms for pod "kube-proxy-wj5z9" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:03.814292  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.009381  112472 request.go:629] Waited for 194.978939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:31:04.009442  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889
	I0804 01:31:04.009447  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.009454  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.009460  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.012698  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.208984  112472 request.go:629] Waited for 195.727805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:04.209062  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889
	I0804 01:31:04.209067  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.209076  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.209081  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.212897  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.213751  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:04.213776  112472 pod_ready.go:81] duration metric: took 399.475908ms for pod "kube-scheduler-ha-998889" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.213786  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.408777  112472 request.go:629] Waited for 194.906433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:31:04.408848  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m02
	I0804 01:31:04.408854  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.408861  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.408871  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.412642  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.609010  112472 request.go:629] Waited for 195.402222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:04.609081  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m02
	I0804 01:31:04.609087  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.609095  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.609099  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.612847  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:04.613707  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:04.613729  112472 pod_ready.go:81] duration metric: took 399.935389ms for pod "kube-scheduler-ha-998889-m02" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.613742  112472 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:04.808754  112472 request.go:629] Waited for 194.934148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m03
	I0804 01:31:04.808829  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-998889-m03
	I0804 01:31:04.808834  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:04.808846  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:04.808849  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:04.812481  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.008793  112472 request.go:629] Waited for 195.369713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:05.008876  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes/ha-998889-m03
	I0804 01:31:05.008882  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.008890  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.008894  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.012567  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.013447  112472 pod_ready.go:92] pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace has status "Ready":"True"
	I0804 01:31:05.013471  112472 pod_ready.go:81] duration metric: took 399.720375ms for pod "kube-scheduler-ha-998889-m03" in "kube-system" namespace to be "Ready" ...
	I0804 01:31:05.013487  112472 pod_ready.go:38] duration metric: took 5.185318039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0804 01:31:05.013508  112472 api_server.go:52] waiting for apiserver process to appear ...
	I0804 01:31:05.013572  112472 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:31:05.031163  112472 api_server.go:72] duration metric: took 23.05865127s to wait for apiserver process to appear ...
	I0804 01:31:05.031198  112472 api_server.go:88] waiting for apiserver healthz status ...
	I0804 01:31:05.031220  112472 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0804 01:31:05.035658  112472 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0804 01:31:05.035721  112472 round_trippers.go:463] GET https://192.168.39.12:8443/version
	I0804 01:31:05.035728  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.035736  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.035742  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.036644  112472 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0804 01:31:05.036704  112472 api_server.go:141] control plane version: v1.30.3
	I0804 01:31:05.036714  112472 api_server.go:131] duration metric: took 5.510033ms to wait for apiserver health ...
	I0804 01:31:05.036724  112472 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 01:31:05.209160  112472 request.go:629] Waited for 172.366452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.209257  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.209273  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.209285  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.209297  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.216801  112472 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0804 01:31:05.223052  112472 system_pods.go:59] 24 kube-system pods found
	I0804 01:31:05.223085  112472 system_pods.go:61] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:31:05.223090  112472 system_pods.go:61] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:31:05.223094  112472 system_pods.go:61] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:31:05.223097  112472 system_pods.go:61] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:31:05.223100  112472 system_pods.go:61] "etcd-ha-998889-m03" [6d4964c1-5a0a-4f37-900d-5b7746fab7ec] Running
	I0804 01:31:05.223103  112472 system_pods.go:61] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:31:05.223106  112472 system_pods.go:61] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:31:05.223109  112472 system_pods.go:61] "kindnet-rsp5h" [7db6f750-c2f4-404f-8ca1-49365012789d] Running
	I0804 01:31:05.223112  112472 system_pods.go:61] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:31:05.223115  112472 system_pods.go:61] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:31:05.223118  112472 system_pods.go:61] "kube-apiserver-ha-998889-m03" [836845ff-1fd9-45a1-b3d1-2bccf0cde74a] Running
	I0804 01:31:05.223122  112472 system_pods.go:61] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:31:05.223125  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:31:05.223128  112472 system_pods.go:61] "kube-controller-manager-ha-998889-m03" [ab317268-bc19-4dfd-bcd3-f1fc493b337e] Running
	I0804 01:31:05.223131  112472 system_pods.go:61] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:31:05.223135  112472 system_pods.go:61] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:31:05.223139  112472 system_pods.go:61] "kube-proxy-wj5z9" [36f91407-7b5a-4101-b7a9-9adbf18a209f] Running
	I0804 01:31:05.223144  112472 system_pods.go:61] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:31:05.223147  112472 system_pods.go:61] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:31:05.223161  112472 system_pods.go:61] "kube-scheduler-ha-998889-m03" [cb00cbab-4deb-4c0f-a4e5-9f853235c528] Running
	I0804 01:31:05.223167  112472 system_pods.go:61] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:31:05.223170  112472 system_pods.go:61] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:31:05.223173  112472 system_pods.go:61] "kube-vip-ha-998889-m03" [b7fea609-e938-4537-973d-bd18eaffe449] Running
	I0804 01:31:05.223175  112472 system_pods.go:61] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:31:05.223182  112472 system_pods.go:74] duration metric: took 186.451699ms to wait for pod list to return data ...
	I0804 01:31:05.223193  112472 default_sa.go:34] waiting for default service account to be created ...
	I0804 01:31:05.408565  112472 request.go:629] Waited for 185.28427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:31:05.408629  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/default/serviceaccounts
	I0804 01:31:05.408635  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.408643  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.408648  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.412153  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.412312  112472 default_sa.go:45] found service account: "default"
	I0804 01:31:05.412366  112472 default_sa.go:55] duration metric: took 189.127271ms for default service account to be created ...
	I0804 01:31:05.412383  112472 system_pods.go:116] waiting for k8s-apps to be running ...
	I0804 01:31:05.609477  112472 request.go:629] Waited for 196.990181ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.609540  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/namespaces/kube-system/pods
	I0804 01:31:05.609545  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.609556  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.609566  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.617750  112472 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0804 01:31:05.623803  112472 system_pods.go:86] 24 kube-system pods found
	I0804 01:31:05.623837  112472 system_pods.go:89] "coredns-7db6d8ff4d-b8ds7" [b7c997bc-312e-488c-ad30-0647eb5b757e] Running
	I0804 01:31:05.623843  112472 system_pods.go:89] "coredns-7db6d8ff4d-ddb5m" [186999bf-43e4-43e7-a5dc-c84331a2f521] Running
	I0804 01:31:05.623848  112472 system_pods.go:89] "etcd-ha-998889" [82415e8c-a79b-41f3-b6b6-86e1b4e63951] Running
	I0804 01:31:05.623852  112472 system_pods.go:89] "etcd-ha-998889-m02" [0c0646fc-8ef5-47e1-a6c2-59708d88fa7d] Running
	I0804 01:31:05.623857  112472 system_pods.go:89] "etcd-ha-998889-m03" [6d4964c1-5a0a-4f37-900d-5b7746fab7ec] Running
	I0804 01:31:05.623861  112472 system_pods.go:89] "kindnet-gc22h" [db5d63c3-4231-45ae-a2e2-b48fbf64be91] Running
	I0804 01:31:05.623865  112472 system_pods.go:89] "kindnet-mm9t2" [46ee5b5b-81d3-4acc-aee0-d57be09c3858] Running
	I0804 01:31:05.623869  112472 system_pods.go:89] "kindnet-rsp5h" [7db6f750-c2f4-404f-8ca1-49365012789d] Running
	I0804 01:31:05.623873  112472 system_pods.go:89] "kube-apiserver-ha-998889" [dc07f6be-b73f-44ce-a196-ad51d034ae1d] Running
	I0804 01:31:05.623877  112472 system_pods.go:89] "kube-apiserver-ha-998889-m02" [b462bad7-5f36-491b-a021-de1943fa91ea] Running
	I0804 01:31:05.623881  112472 system_pods.go:89] "kube-apiserver-ha-998889-m03" [836845ff-1fd9-45a1-b3d1-2bccf0cde74a] Running
	I0804 01:31:05.623885  112472 system_pods.go:89] "kube-controller-manager-ha-998889" [5680756c-077a-4115-abc9-7495c9b5c725] Running
	I0804 01:31:05.623889  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m02" [17fae882-3021-45ef-8e54-70097546e0dc] Running
	I0804 01:31:05.623894  112472 system_pods.go:89] "kube-controller-manager-ha-998889-m03" [ab317268-bc19-4dfd-bcd3-f1fc493b337e] Running
	I0804 01:31:05.623902  112472 system_pods.go:89] "kube-proxy-56twz" [e9fc726d-cf1c-44a8-839e-84b90f69609f] Running
	I0804 01:31:05.623909  112472 system_pods.go:89] "kube-proxy-v4j77" [87ac4988-17c6-4628-afde-1e1a65c8b66e] Running
	I0804 01:31:05.623912  112472 system_pods.go:89] "kube-proxy-wj5z9" [36f91407-7b5a-4101-b7a9-9adbf18a209f] Running
	I0804 01:31:05.623916  112472 system_pods.go:89] "kube-scheduler-ha-998889" [2314946f-1cc5-4501-a024-f91be0ef6af9] Running
	I0804 01:31:05.623920  112472 system_pods.go:89] "kube-scheduler-ha-998889-m02" [895df81c-737f-430a-bbd5-9536fde88fa7] Running
	I0804 01:31:05.623924  112472 system_pods.go:89] "kube-scheduler-ha-998889-m03" [cb00cbab-4deb-4c0f-a4e5-9f853235c528] Running
	I0804 01:31:05.623927  112472 system_pods.go:89] "kube-vip-ha-998889" [1baf4284-e439-4cfa-b46f-dc618a37580b] Running
	I0804 01:31:05.623930  112472 system_pods.go:89] "kube-vip-ha-998889-m02" [379a3823-ba56-4127-a13b-133808a3c1a3] Running
	I0804 01:31:05.623934  112472 system_pods.go:89] "kube-vip-ha-998889-m03" [b7fea609-e938-4537-973d-bd18eaffe449] Running
	I0804 01:31:05.623937  112472 system_pods.go:89] "storage-provisioner" [b2eb4a37-052e-4e8e-9b0d-d58847698eeb] Running
	I0804 01:31:05.623944  112472 system_pods.go:126] duration metric: took 211.555603ms to wait for k8s-apps to be running ...
	I0804 01:31:05.623953  112472 system_svc.go:44] waiting for kubelet service to be running ....
	I0804 01:31:05.623998  112472 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:31:05.641051  112472 system_svc.go:56] duration metric: took 17.086327ms WaitForService to wait for kubelet
	I0804 01:31:05.641083  112472 kubeadm.go:582] duration metric: took 23.668574748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:31:05.641103  112472 node_conditions.go:102] verifying NodePressure condition ...
	I0804 01:31:05.808449  112472 request.go:629] Waited for 167.265829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.12:8443/api/v1/nodes
	I0804 01:31:05.808512  112472 round_trippers.go:463] GET https://192.168.39.12:8443/api/v1/nodes
	I0804 01:31:05.808518  112472 round_trippers.go:469] Request Headers:
	I0804 01:31:05.808525  112472 round_trippers.go:473]     Accept: application/json, */*
	I0804 01:31:05.808529  112472 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0804 01:31:05.812316  112472 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0804 01:31:05.813391  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813419  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813437  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813443  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813448  112472 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 01:31:05.813453  112472 node_conditions.go:123] node cpu capacity is 2
	I0804 01:31:05.813458  112472 node_conditions.go:105] duration metric: took 172.35042ms to run NodePressure ...
	I0804 01:31:05.813478  112472 start.go:241] waiting for startup goroutines ...
	I0804 01:31:05.813503  112472 start.go:255] writing updated cluster config ...
	I0804 01:31:05.813886  112472 ssh_runner.go:195] Run: rm -f paused
	I0804 01:31:05.867511  112472 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0804 01:31:05.869763  112472 out.go:177] * Done! kubectl is now configured to use "ha-998889" cluster and "default" namespace by default
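	
	For reference, the node_ready.go / pod_ready.go lines above show a simple poll-until-Ready loop: the client GETs the node (or pod) object roughly every 500ms and checks its Ready condition until it flips to True or the deadline expires. The sketch below is a minimal, hypothetical reconstruction of that pattern using client-go; it is not minikube's actual helper, and the kubeconfig path, poll interval, and timeout are illustrative assumptions only.
	
	// Minimal sketch (assumptions noted above): poll a node until its Ready
	// condition reports True, mirroring the node_ready.go wait loop in the log.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // the log above polls at ~500ms intervals
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // Ready=True, as seen for ha-998889-m03 at 01:30:59
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}
	
	func main() {
		// Hypothetical kubeconfig path; the report uses its own Jenkins workspace path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The log gives the pod wait a 6m0s budget; reuse that here as the overall deadline.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-998889-m03"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
	
	The same loop shape, pointed at pods in kube-system and their Ready conditions, accounts for the subsequent pod_ready.go entries and the client-side throttling messages (many sequential GETs from one client).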
	
	
	==> CRI-O <==
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.846754242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735346846731667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1c8c8c7-5a7b-4f18-a3fe-004f8f4e8fd9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.847346495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19c54302-ee5f-49e5-b975-410850b93e66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.847402361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19c54302-ee5f-49e5-b975-410850b93e66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.847631460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19c54302-ee5f-49e5-b975-410850b93e66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.890807754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f0059c9-a9cc-42e8-8d91-8db2795649fe name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.890977473Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f0059c9-a9cc-42e8-8d91-8db2795649fe name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.893007146Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff2fa132-8dca-4171-b270-083bf163b14e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.893680179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735346893654383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff2fa132-8dca-4171-b270-083bf163b14e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.894231284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb1a134f-2a82-4486-bdfb-05360b388e0a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.894301384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb1a134f-2a82-4486-bdfb-05360b388e0a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.894518995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb1a134f-2a82-4486-bdfb-05360b388e0a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.939395453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05cdcbf6-d14f-4d06-beba-8dd50e29c4b5 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.939469092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05cdcbf6-d14f-4d06-beba-8dd50e29c4b5 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.941152941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b41feb6-715e-41e9-803f-0e704526a8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.941689339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735346941661997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b41feb6-715e-41e9-803f-0e704526a8b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.942484943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37974d76-7c10-43c8-a725-58cf1cfbb5d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.942547465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37974d76-7c10-43c8-a725-58cf1cfbb5d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.942808413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37974d76-7c10-43c8-a725-58cf1cfbb5d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.982705347Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66ff5a61-eef5-4209-ad24-322bfa4b2e17 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.982778168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66ff5a61-eef5-4209-ad24-322bfa4b2e17 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.984371970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd8eeb65-193a-44d7-ae13-0b4ff93f777f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.984922213Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735346984811988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd8eeb65-193a-44d7-ae13-0b4ff93f777f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.985523149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e78ad73-3c71-4780-8700-3885ccac3f3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.985591358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e78ad73-3c71-4780-8700-3885ccac3f3e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:35:46 ha-998889 crio[686]: time="2024-08-04 01:35:46.985801916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735070152311783,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927897758714,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722734927838974629,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184,PodSandboxId:ba6b4eda679dcdb869f668ee54e13bcb005892453b7d93545d9fb1187272c1ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1722734927727482836,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722734915708378127,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172273491
0732540795,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a,PodSandboxId:75eeb21e3e26ad4a2f88549b1d69b2d7eea9f374a8c9bcc9498199c375909d55,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17227348929
80663215,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 353262e960949a9cd83fabcbd9d9ed77,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722734890252370246,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722734890219525088,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977,PodSandboxId:35f3b8346489b7b08460445329778ede5fe380943acc3597f287e48353454609,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722734890201105995,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318,PodSandboxId:fdd7687c140dbd7f65cfbe94f261409b7bc235d31c2b6b18b54fa5d1823848b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722734890140566048,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e78ad73-3c71-4780-8700-3885ccac3f3e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1bb7230a66693       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   5b4550fd8d43d       busybox-fc5497c4f-v468b
	7ce1fc9d2ceb3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3037e05c8f0db       coredns-7db6d8ff4d-b8ds7
	fe75909603216       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   a3cc1795993d6       coredns-7db6d8ff4d-ddb5m
	426453d5275e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ba6b4eda679dc       storage-provisioner
	e987e973e97a5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   120c9a2eb52aa       kindnet-gc22h
	e32fb23a61d2d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   9689d6db72b02       kube-proxy-56twz
	95795d7d25530       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   75eeb21e3e26a       kube-vip-ha-998889
	cbd934bafbbf1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   580e42f37b240       etcd-ha-998889
	3f264e5c2143d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   c25b0800264cf       kube-scheduler-ha-998889
	0c31b954330c4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   35f3b8346489b       kube-controller-manager-ha-998889
	8d16347be7d62       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   fdd7687c140db       kube-apiserver-ha-998889
	
	
	==> coredns [7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947] <==
	[INFO] 10.244.1.2:49038 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013018026s
	[INFO] 10.244.0.4:40557 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000085015s
	[INFO] 10.244.1.2:53619 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000211794s
	[INFO] 10.244.1.2:44820 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171002s
	[INFO] 10.244.1.2:54493 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154283s
	[INFO] 10.244.1.2:45366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188537s
	[INFO] 10.244.1.2:42179 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223485s
	[INFO] 10.244.2.2:48925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257001s
	[INFO] 10.244.2.2:46133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001441239s
	[INFO] 10.244.2.2:40620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108193s
	[INFO] 10.244.2.2:45555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071897s
	[INFO] 10.244.0.4:57133 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007622s
	[INFO] 10.244.0.4:45128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012024s
	[INFO] 10.244.0.4:33660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084733s
	[INFO] 10.244.1.2:48368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133283s
	[INFO] 10.244.1.2:42909 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130327s
	[INFO] 10.244.1.2:54181 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067193s
	[INFO] 10.244.2.2:36881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125847s
	[INFO] 10.244.2.2:52948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090317s
	[INFO] 10.244.1.2:34080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132803s
	[INFO] 10.244.1.2:38625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147078s
	[INFO] 10.244.2.2:41049 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205078s
	[INFO] 10.244.2.2:47520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094037s
	[INFO] 10.244.2.2:48004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211339s
	[INFO] 10.244.0.4:52706 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087998s
	
	
	==> coredns [fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9] <==
	[INFO] 10.244.1.2:57793 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00333282s
	[INFO] 10.244.1.2:54028 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012772192s
	[INFO] 10.244.1.2:49028 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171231s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001982538s
	[INFO] 10.244.2.2:59450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165578s
	[INFO] 10.244.2.2:44599 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132406s
	[INFO] 10.244.2.2:38280 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086968s
	[INFO] 10.244.0.4:52340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111664s
	[INFO] 10.244.0.4:55794 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001989197s
	[INFO] 10.244.0.4:56345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.0.4:50778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090371s
	[INFO] 10.244.0.4:47116 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132729s
	[INFO] 10.244.1.2:54780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104255s
	[INFO] 10.244.2.2:52086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092312s
	[INFO] 10.244.2.2:36096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008133s
	[INFO] 10.244.0.4:35645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084037s
	[INFO] 10.244.0.4:57031 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00004652s
	[INFO] 10.244.0.4:53264 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005834s
	[INFO] 10.244.0.4:52476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111362s
	[INFO] 10.244.1.2:39754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161853s
	[INFO] 10.244.1.2:44320 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018965s
	[INFO] 10.244.2.2:58250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133355s
	[INFO] 10.244.0.4:34248 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137551s
	[INFO] 10.244.0.4:46858 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082831s
	[INFO] 10.244.0.4:52801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017483s
	
	
	==> describe nodes <==
	Name:               ha-998889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:28:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:35:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:31:19 +0000   Sun, 04 Aug 2024 01:28:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-998889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9bfc18a8dd4a25ae5d0b652cb98f91
	  System UUID:                fa9bfc18-a8dd-4a25-ae5d-0b652cb98f91
	  Boot ID:                    ddede9e4-4547-41a5-820a-f6568caf06a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v468b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-7db6d8ff4d-b8ds7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m17s
	  kube-system                 coredns-7db6d8ff4d-ddb5m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m17s
	  kube-system                 etcd-ha-998889                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m31s
	  kube-system                 kindnet-gc22h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m18s
	  kube-system                 kube-apiserver-ha-998889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-ha-998889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-56twz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-scheduler-ha-998889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-vip-ha-998889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m38s (x7 over 7m38s)  kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m38s (x8 over 7m38s)  kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s (x8 over 7m38s)  kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m19s                  node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal  NodeReady                7m                     kubelet          Node ha-998889 status is now: NodeReady
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	
	
	Name:               ha-998889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:29:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:32:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 01:31:24 +0000   Sun, 04 Aug 2024 01:32:59 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-998889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8754ed7ba6c04d5d808bf540e4c5a093
	  System UUID:                8754ed7b-a6c0-4d5d-808b-f540e4c5a093
	  Boot ID:                    aab72127-3c35-4594-8bb2-579116036f9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jqps                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-ha-998889-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-mm9t2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m27s
	  kube-system                 kube-apiserver-ha-998889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-998889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-v4j77                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-998889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-vip-ha-998889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m22s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m27s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m27s)  kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m27s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           6m8s                   node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  NodeNotReady             2m48s                  node-controller  Node ha-998889-m02 status is now: NodeNotReady
	
	
	Name:               ha-998889-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_30_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:30:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:35:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:31:39 +0000   Sun, 04 Aug 2024 01:30:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-998889-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49ee34ab17a14b2ba68118c94f92f005
	  System UUID:                49ee34ab-17a1-4b2b-a681-18c94f92f005
	  Boot ID:                    21c0e6a6-ac5b-4e27-887c-e134468a610a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8wnwt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 etcd-ha-998889-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-rsp5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m10s
	  kube-system                 kube-apiserver-ha-998889-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-ha-998889-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-wj5z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-ha-998889-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-998889-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-998889-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	
	
	Name:               ha-998889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:31:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:31:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:32:14 +0000   Sun, 04 Aug 2024 01:32:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-998889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e86557b9788446aca3bd64c7bcc82957
	  System UUID:                e86557b9-7884-46ac-a3bd-64c7bcc82957
	  Boot ID:                    1141c25d-ddf9-401d-80e6-f074ce6278a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5cv7z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-proxy-9qdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  Starting                 4m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)  kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-998889-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 4 01:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050286] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040198] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.778082] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.532172] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.604472] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.869407] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.063774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058921] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.163748] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.144819] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.274744] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Aug 4 01:28] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.067193] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.231084] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.024644] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.031121] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.102027] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.498623] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.120089] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 4 01:29] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6] <==
	{"level":"warn","ts":"2024-08-04T01:35:47.154599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.168654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.268964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.273455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.280079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.284833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.302685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.312762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.319181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.322605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.326104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.333936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.34006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.346161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.350084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.353237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.360034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.365654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.368098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.372271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.375225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.378249Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.383448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.389045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-04T01:35:47.394429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ab0e927fe14112bb","from":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 01:35:47 up 8 min,  0 users,  load average: 0.30, 0.34, 0.19
	Linux ha-998889 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957] <==
	I0804 01:35:16.899686       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:35:26.892970       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:35:26.893012       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:35:26.893182       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:35:26.893207       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:35:26.893278       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:35:26.893301       1 main.go:299] handling current node
	I0804 01:35:26.893312       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:35:26.893317       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:35:36.891415       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:35:36.891444       1 main.go:299] handling current node
	I0804 01:35:36.891457       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:35:36.891462       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:35:36.891683       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:35:36.891707       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:35:36.891771       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:35:36.891793       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:35:46.892105       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:35:46.892152       1 main.go:299] handling current node
	I0804 01:35:46.892166       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:35:46.892172       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:35:46.892364       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:35:46.892414       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:35:46.892484       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:35:46.892507       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318] <==
	I0804 01:28:29.592911       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0804 01:28:29.646614       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0804 01:29:05.872124       1 trace.go:236] Trace[178944675]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.12,type:*v1.Endpoints,resource:apiServerIPInfo (04-Aug-2024 01:29:05.313) (total time: 558ms):
	Trace[178944675]: ---"initial value restored" 169ms (01:29:05.483)
	Trace[178944675]: ---"Transaction prepared" 128ms (01:29:05.611)
	Trace[178944675]: ---"Txn call completed" 260ms (01:29:05.872)
	Trace[178944675]: [558.473332ms] [558.473332ms] END
	E0804 01:31:11.707811       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34952: use of closed network connection
	E0804 01:31:11.909623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34958: use of closed network connection
	E0804 01:31:12.098500       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:34984: use of closed network connection
	E0804 01:31:12.301392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35012: use of closed network connection
	E0804 01:31:12.498051       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35028: use of closed network connection
	E0804 01:31:12.683814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35054: use of closed network connection
	E0804 01:31:12.859058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35082: use of closed network connection
	E0804 01:31:13.046392       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35100: use of closed network connection
	E0804 01:31:13.254630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35114: use of closed network connection
	E0804 01:31:13.563457       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35128: use of closed network connection
	E0804 01:31:13.742693       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35146: use of closed network connection
	E0804 01:31:13.942191       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35162: use of closed network connection
	E0804 01:31:14.121659       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35174: use of closed network connection
	E0804 01:31:14.301015       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35188: use of closed network connection
	E0804 01:31:14.483485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35204: use of closed network connection
	I0804 01:31:46.648788       1 trace.go:236] Trace[117369386]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:28b6d22f-aae3-4d5c-b499-327f8ad98fed,client:192.168.39.183,api-group:,api-version:v1,name:kube-proxy-thr67,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-thr67,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:DELETE (04-Aug-2024 01:31:45.827) (total time: 821ms):
	Trace[117369386]: ---"Object deleted from database" 383ms (01:31:46.648)
	Trace[117369386]: [821.135533ms] [821.135533ms] END
	
	
	==> kube-controller-manager [0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977] <==
	I0804 01:30:38.987640       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-998889-m03"
	I0804 01:31:06.800547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.04577ms"
	I0804 01:31:06.837340       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.713966ms"
	I0804 01:31:06.837476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.23µs"
	I0804 01:31:06.838216       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.963µs"
	I0804 01:31:06.841167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.278µs"
	I0804 01:31:06.947939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.599132ms"
	I0804 01:31:07.171116       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.926535ms"
	I0804 01:31:07.217963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.786157ms"
	I0804 01:31:07.218619       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.556µs"
	I0804 01:31:07.826554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.639µs"
	I0804 01:31:10.206264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.321251ms"
	I0804 01:31:10.206339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.205µs"
	I0804 01:31:10.531240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.132371ms"
	I0804 01:31:10.531328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.181µs"
	I0804 01:31:11.249118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.716992ms"
	I0804 01:31:11.249310       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.555µs"
	E0804 01:31:43.715288       1 certificate_controller.go:146] Sync csr-8dqlk failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8dqlk": the object has been modified; please apply your changes to the latest version and try again
	I0804 01:31:43.964799       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-998889-m04\" does not exist"
	I0804 01:31:43.981389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-998889-m04" podCIDRs=["10.244.3.0/24"]
	I0804 01:31:44.000233       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-998889-m04"
	I0804 01:32:04.866485       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	I0804 01:32:59.031401       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	I0804 01:32:59.137628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.081206ms"
	I0804 01:32:59.140330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.396µs"
	
	
	==> kube-proxy [e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372] <==
	I0804 01:28:30.963483       1 server_linux.go:69] "Using iptables proxy"
	I0804 01:28:30.980587       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.12"]
	I0804 01:28:31.031710       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:28:31.031766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:28:31.031782       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:28:31.038022       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:28:31.038663       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:28:31.038747       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:28:31.040962       1 config.go:192] "Starting service config controller"
	I0804 01:28:31.041184       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:28:31.041290       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:28:31.041313       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:28:31.043474       1 config.go:319] "Starting node config controller"
	I0804 01:28:31.043567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:28:31.141930       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:28:31.141960       1 shared_informer.go:320] Caches are synced for service config
	I0804 01:28:31.143968       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df] <==
	I0804 01:30:37.910750       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-rsp5h" node="ha-998889-m03"
	E0804 01:30:37.918545       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wj5z9\": pod kube-proxy-wj5z9 is already assigned to node \"ha-998889-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wj5z9" node="ha-998889-m03"
	E0804 01:30:37.919601       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 36f91407-7b5a-4101-b7a9-9adbf18a209f(kube-system/kube-proxy-wj5z9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wj5z9"
	E0804 01:30:37.919740       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wj5z9\": pod kube-proxy-wj5z9 is already assigned to node \"ha-998889-m03\"" pod="kube-system/kube-proxy-wj5z9"
	I0804 01:30:37.919824       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wj5z9" node="ha-998889-m03"
	E0804 01:31:06.770278       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8wnwt\": pod busybox-fc5497c4f-8wnwt is already assigned to node \"ha-998889-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8wnwt" node="ha-998889-m03"
	E0804 01:31:06.770619       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 7668d0a2-3740-4ab0-aa7b-60b70fee82fc(default/busybox-fc5497c4f-8wnwt) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-8wnwt"
	E0804 01:31:06.770767       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8wnwt\": pod busybox-fc5497c4f-8wnwt is already assigned to node \"ha-998889-m03\"" pod="default/busybox-fc5497c4f-8wnwt"
	I0804 01:31:06.770966       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8wnwt" node="ha-998889-m03"
	E0804 01:31:06.819451       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v468b\": pod busybox-fc5497c4f-v468b is already assigned to node \"ha-998889\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-v468b" node="ha-998889"
	E0804 01:31:06.819751       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c(default/busybox-fc5497c4f-v468b) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-v468b"
	E0804 01:31:06.819966       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-v468b\": pod busybox-fc5497c4f-v468b is already assigned to node \"ha-998889\"" pod="default/busybox-fc5497c4f-v468b"
	I0804 01:31:06.820439       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-v468b" node="ha-998889"
	E0804 01:31:44.050478       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5cv7z\": pod kindnet-5cv7z is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5cv7z" node="ha-998889-m04"
	E0804 01:31:44.050568       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6e18a7fd-57f2-4672-8c67-bde831c5fce7(kube-system/kindnet-5cv7z) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5cv7z"
	E0804 01:31:44.050600       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5cv7z\": pod kindnet-5cv7z is already assigned to node \"ha-998889-m04\"" pod="kube-system/kindnet-5cv7z"
	I0804 01:31:44.050635       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5cv7z" node="ha-998889-m04"
	E0804 01:31:44.051326       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdn6\": pod kube-proxy-9qdn6 is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-9qdn6" node="ha-998889-m04"
	E0804 01:31:44.051400       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod aae55e56-e5f1-4ce0-9427-eaf1ae449bee(kube-system/kube-proxy-9qdn6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-9qdn6"
	E0804 01:31:44.051418       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-9qdn6\": pod kube-proxy-9qdn6 is already assigned to node \"ha-998889-m04\"" pod="kube-system/kube-proxy-9qdn6"
	I0804 01:31:44.051440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-9qdn6" node="ha-998889-m04"
	E0804 01:31:44.221543       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-thr67\": pod kube-proxy-thr67 is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-thr67" node="ha-998889-m04"
	E0804 01:31:44.221899       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 777f50c1-032c-4f42-82e3-50a8bd8e1302(kube-system/kube-proxy-thr67) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-thr67"
	E0804 01:31:44.223222       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-thr67\": pod kube-proxy-thr67 is already assigned to node \"ha-998889-m04\"" pod="kube-system/kube-proxy-thr67"
	I0804 01:31:44.223375       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-thr67" node="ha-998889-m04"
	
	
	==> kubelet <==
	Aug 04 01:31:16 ha-998889 kubelet[1372]: E0804 01:31:16.429736    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:31:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:31:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:32:16 ha-998889 kubelet[1372]: E0804 01:32:16.426361    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:32:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:32:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:33:16 ha-998889 kubelet[1372]: E0804 01:33:16.440145    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:33:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:33:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:34:16 ha-998889 kubelet[1372]: E0804 01:34:16.431125    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:34:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:34:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:35:16 ha-998889 kubelet[1372]: E0804 01:35:16.427251    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:35:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:35:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:35:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:35:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-998889 -n ha-998889
helpers_test.go:261: (dbg) Run:  kubectl --context ha-998889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.07s)
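Editor's note (not part of the captured output): a minimal sketch of how the ha-998889-m02 NotReady/Unknown state seen in the describe output above could be re-queried by hand, assuming the same ha-998889 kubectl context used by the post-mortem helpers is still available:

	kubectl --context ha-998889 get node ha-998889-m02 -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'

A Ready=Unknown line with reason NodeStatusUnknown would match the "Kubelet stopped posting node status" conditions recorded before the restart.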

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-998889 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-998889 -v=7 --alsologtostderr
E0804 01:36:42.265404   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:37:09.952038   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-998889 -v=7 --alsologtostderr: exit status 82 (2m1.919678389s)

                                                
                                                
-- stdout --
	* Stopping node "ha-998889-m04"  ...
	* Stopping node "ha-998889-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:35:48.899455  118350 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:35:48.899579  118350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:48.899588  118350 out.go:304] Setting ErrFile to fd 2...
	I0804 01:35:48.899592  118350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:35:48.899775  118350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:35:48.900057  118350 out.go:298] Setting JSON to false
	I0804 01:35:48.900161  118350 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:48.900578  118350 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:48.900673  118350 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:35:48.900888  118350 mustload.go:65] Loading cluster: ha-998889
	I0804 01:35:48.901075  118350 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:35:48.901123  118350 stop.go:39] StopHost: ha-998889-m04
	I0804 01:35:48.901562  118350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:48.901627  118350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:48.917386  118350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0804 01:35:48.917824  118350 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:48.918383  118350 main.go:141] libmachine: Using API Version  1
	I0804 01:35:48.918413  118350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:48.918764  118350 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:48.921264  118350 out.go:177] * Stopping node "ha-998889-m04"  ...
	I0804 01:35:48.922743  118350 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 01:35:48.922770  118350 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:35:48.923032  118350 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 01:35:48.923075  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:35:48.925782  118350 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:48.926167  118350 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:31:29 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:35:48.926198  118350 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:35:48.926372  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:35:48.926524  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:35:48.926663  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:35:48.926771  118350 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:35:49.012898  118350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 01:35:49.066740  118350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 01:35:49.120759  118350 main.go:141] libmachine: Stopping "ha-998889-m04"...
	I0804 01:35:49.120840  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:49.122640  118350 main.go:141] libmachine: (ha-998889-m04) Calling .Stop
	I0804 01:35:49.126472  118350 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 0/120
	I0804 01:35:50.338191  118350 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:35:50.339613  118350 main.go:141] libmachine: Machine "ha-998889-m04" was stopped.
	I0804 01:35:50.339636  118350 stop.go:75] duration metric: took 1.416893087s to stop
	I0804 01:35:50.339663  118350 stop.go:39] StopHost: ha-998889-m03
	I0804 01:35:50.340005  118350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:35:50.340059  118350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:35:50.354903  118350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44045
	I0804 01:35:50.355392  118350 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:35:50.355850  118350 main.go:141] libmachine: Using API Version  1
	I0804 01:35:50.355875  118350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:35:50.356202  118350 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:35:50.359103  118350 out.go:177] * Stopping node "ha-998889-m03"  ...
	I0804 01:35:50.360341  118350 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 01:35:50.360381  118350 main.go:141] libmachine: (ha-998889-m03) Calling .DriverName
	I0804 01:35:50.360621  118350 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 01:35:50.360644  118350 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHHostname
	I0804 01:35:50.363828  118350 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:50.364299  118350 main.go:141] libmachine: (ha-998889-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:ff:5a", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:30:02 +0000 UTC Type:0 Mac:52:54:00:65:ff:5a Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-998889-m03 Clientid:01:52:54:00:65:ff:5a}
	I0804 01:35:50.364336  118350 main.go:141] libmachine: (ha-998889-m03) DBG | domain ha-998889-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:65:ff:5a in network mk-ha-998889
	I0804 01:35:50.364436  118350 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHPort
	I0804 01:35:50.364616  118350 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHKeyPath
	I0804 01:35:50.364752  118350 main.go:141] libmachine: (ha-998889-m03) Calling .GetSSHUsername
	I0804 01:35:50.364920  118350 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m03/id_rsa Username:docker}
	I0804 01:35:50.459861  118350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 01:35:50.517137  118350 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 01:35:50.572796  118350 main.go:141] libmachine: Stopping "ha-998889-m03"...
	I0804 01:35:50.572830  118350 main.go:141] libmachine: (ha-998889-m03) Calling .GetState
	I0804 01:35:50.574416  118350 main.go:141] libmachine: (ha-998889-m03) Calling .Stop
	I0804 01:35:50.577900  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 0/120
	I0804 01:35:51.579387  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 1/120
	I0804 01:35:52.580933  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 2/120
	I0804 01:35:53.582524  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 3/120
	I0804 01:35:54.583966  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 4/120
	I0804 01:35:55.586156  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 5/120
	I0804 01:35:56.588322  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 6/120
	I0804 01:35:57.589927  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 7/120
	I0804 01:35:58.591544  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 8/120
	I0804 01:35:59.593179  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 9/120
	I0804 01:36:00.595528  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 10/120
	I0804 01:36:01.596967  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 11/120
	I0804 01:36:02.598467  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 12/120
	I0804 01:36:03.600319  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 13/120
	I0804 01:36:04.601858  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 14/120
	I0804 01:36:05.603573  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 15/120
	I0804 01:36:06.605010  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 16/120
	I0804 01:36:07.606349  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 17/120
	I0804 01:36:08.607768  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 18/120
	I0804 01:36:09.609350  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 19/120
	I0804 01:36:10.610984  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 20/120
	I0804 01:36:11.612471  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 21/120
	I0804 01:36:12.614263  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 22/120
	I0804 01:36:13.615995  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 23/120
	I0804 01:36:14.617385  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 24/120
	I0804 01:36:15.618953  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 25/120
	I0804 01:36:16.620438  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 26/120
	I0804 01:36:17.622260  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 27/120
	I0804 01:36:18.623933  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 28/120
	I0804 01:36:19.625420  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 29/120
	I0804 01:36:20.627898  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 30/120
	I0804 01:36:21.629742  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 31/120
	I0804 01:36:22.631389  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 32/120
	I0804 01:36:23.633106  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 33/120
	I0804 01:36:24.634759  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 34/120
	I0804 01:36:25.636827  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 35/120
	I0804 01:36:26.638250  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 36/120
	I0804 01:36:27.639760  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 37/120
	I0804 01:36:28.641225  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 38/120
	I0804 01:36:29.643544  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 39/120
	I0804 01:36:30.645682  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 40/120
	I0804 01:36:31.646944  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 41/120
	I0804 01:36:32.648201  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 42/120
	I0804 01:36:33.649559  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 43/120
	I0804 01:36:34.650947  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 44/120
	I0804 01:36:35.652805  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 45/120
	I0804 01:36:36.654165  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 46/120
	I0804 01:36:37.655397  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 47/120
	I0804 01:36:38.656831  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 48/120
	I0804 01:36:39.658204  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 49/120
	I0804 01:36:40.660160  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 50/120
	I0804 01:36:41.661437  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 51/120
	I0804 01:36:42.662720  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 52/120
	I0804 01:36:43.663988  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 53/120
	I0804 01:36:44.665409  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 54/120
	I0804 01:36:45.667106  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 55/120
	I0804 01:36:46.668582  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 56/120
	I0804 01:36:47.670143  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 57/120
	I0804 01:36:48.671851  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 58/120
	I0804 01:36:49.673265  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 59/120
	I0804 01:36:50.675002  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 60/120
	I0804 01:36:51.676385  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 61/120
	I0804 01:36:52.677735  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 62/120
	I0804 01:36:53.679016  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 63/120
	I0804 01:36:54.680389  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 64/120
	I0804 01:36:55.682214  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 65/120
	I0804 01:36:56.683585  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 66/120
	I0804 01:36:57.684953  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 67/120
	I0804 01:36:58.686335  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 68/120
	I0804 01:36:59.687811  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 69/120
	I0804 01:37:00.689313  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 70/120
	I0804 01:37:01.690595  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 71/120
	I0804 01:37:02.692042  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 72/120
	I0804 01:37:03.693382  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 73/120
	I0804 01:37:04.694962  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 74/120
	I0804 01:37:05.696964  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 75/120
	I0804 01:37:06.698232  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 76/120
	I0804 01:37:07.699793  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 77/120
	I0804 01:37:08.701128  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 78/120
	I0804 01:37:09.702572  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 79/120
	I0804 01:37:10.704347  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 80/120
	I0804 01:37:11.705855  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 81/120
	I0804 01:37:12.707253  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 82/120
	I0804 01:37:13.708601  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 83/120
	I0804 01:37:14.710240  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 84/120
	I0804 01:37:15.712224  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 85/120
	I0804 01:37:16.714257  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 86/120
	I0804 01:37:17.715916  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 87/120
	I0804 01:37:18.717541  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 88/120
	I0804 01:37:19.719961  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 89/120
	I0804 01:37:20.721600  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 90/120
	I0804 01:37:21.723040  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 91/120
	I0804 01:37:22.724349  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 92/120
	I0804 01:37:23.725732  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 93/120
	I0804 01:37:24.727238  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 94/120
	I0804 01:37:25.729265  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 95/120
	I0804 01:37:26.730651  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 96/120
	I0804 01:37:27.732085  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 97/120
	I0804 01:37:28.733387  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 98/120
	I0804 01:37:29.734777  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 99/120
	I0804 01:37:30.736347  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 100/120
	I0804 01:37:31.737805  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 101/120
	I0804 01:37:32.739067  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 102/120
	I0804 01:37:33.740374  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 103/120
	I0804 01:37:34.741740  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 104/120
	I0804 01:37:35.743393  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 105/120
	I0804 01:37:36.744845  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 106/120
	I0804 01:37:37.746321  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 107/120
	I0804 01:37:38.747871  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 108/120
	I0804 01:37:39.749152  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 109/120
	I0804 01:37:40.750950  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 110/120
	I0804 01:37:41.753253  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 111/120
	I0804 01:37:42.754916  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 112/120
	I0804 01:37:43.756414  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 113/120
	I0804 01:37:44.757851  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 114/120
	I0804 01:37:45.759246  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 115/120
	I0804 01:37:46.760571  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 116/120
	I0804 01:37:47.762273  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 117/120
	I0804 01:37:48.763742  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 118/120
	I0804 01:37:49.765242  118350 main.go:141] libmachine: (ha-998889-m03) Waiting for machine to stop 119/120
	I0804 01:37:50.765962  118350 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 01:37:50.766045  118350 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 01:37:50.768025  118350 out.go:177] 
	W0804 01:37:50.769414  118350 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 01:37:50.769435  118350 out.go:239] * 
	* 
	W0804 01:37:50.772454  118350 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 01:37:50.773810  118350 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-998889 -v=7 --alsologtostderr" : exit status 82
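The stop failure above follows the pattern recorded in the stderr block: the driver polls the VM state roughly once per second for up to 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then gives up with the "unable to stop vm" error while ha-998889-m03 is still Running. The following is a minimal, self-contained Go sketch of that wait-for-stop loop; the names stopper, waitForStop, and stuckVM are illustrative for this report only, not minikube's actual API.

// stopwait.go - sketch of the wait-for-stop loop visible in the stderr above:
// issue Stop, then poll the state once per second up to a fixed number of
// attempts, failing if the guest never leaves the Running state.
package main

import (
	"errors"
	"fmt"
	"time"
)

// stopper abstracts the two driver calls the log shows per node: .Stop and .GetState.
type stopper interface {
	Stop() error
	GetState() (string, error)
}

// waitForStop mirrors the "Waiting for machine to stop N/120" progress lines:
// one GetState poll per attempt, with a one-second pause between attempts.
func waitForStop(s stopper, maxAttempts int) error {
	if err := s.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		state, err := s.GetState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates a guest that never leaves the Running state, which is the
// behaviour ha-998889-m03 exhibited in this run.
type stuckVM struct{}

func (stuckVM) Stop() error               { return nil }
func (stuckVM) GetState() (string, error) { return "Running", nil }

func main() {
	// Three attempts instead of 120 so the example finishes quickly.
	if err := waitForStop(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}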
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-998889 --wait=true -v=7 --alsologtostderr
E0804 01:41:42.265622   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-998889 --wait=true -v=7 --alsologtostderr: (4m1.764269992s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-998889
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-998889 -n ha-998889
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-998889 logs -n 25: (1.9837073s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m04 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp testdata/cp-test.txt                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m04_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03:/home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m03 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-998889 node stop m02 -v=7                                                     | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-998889 node start m02 -v=7                                                    | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-998889 -v=7                                                           | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-998889 -v=7                                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-998889 --wait=true -v=7                                                    | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:37 UTC | 04 Aug 24 01:41 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-998889                                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:41 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 01:37:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 01:37:50.819879  118832 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:37:50.820493  118832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:37:50.820511  118832 out.go:304] Setting ErrFile to fd 2...
	I0804 01:37:50.820518  118832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:37:50.821116  118832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:37:50.821721  118832 out.go:298] Setting JSON to false
	I0804 01:37:50.822684  118832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12015,"bootTime":1722723456,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:37:50.822757  118832 start.go:139] virtualization: kvm guest
	I0804 01:37:50.825063  118832 out.go:177] * [ha-998889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 01:37:50.826703  118832 notify.go:220] Checking for updates...
	I0804 01:37:50.826715  118832 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:37:50.828199  118832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:37:50.830086  118832 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:37:50.831545  118832 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:37:50.832847  118832 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:37:50.834196  118832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:37:50.835909  118832 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:37:50.836015  118832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:37:50.836466  118832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:37:50.836542  118832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:37:50.851757  118832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0804 01:37:50.852171  118832 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:37:50.852780  118832 main.go:141] libmachine: Using API Version  1
	I0804 01:37:50.852812  118832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:37:50.853146  118832 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:37:50.853386  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.891026  118832 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 01:37:50.892241  118832 start.go:297] selected driver: kvm2
	I0804 01:37:50.892252  118832 start.go:901] validating driver "kvm2" against &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:37:50.892396  118832 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:37:50.892711  118832 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:37:50.892781  118832 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 01:37:50.907886  118832 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 01:37:50.908792  118832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:37:50.908877  118832 cni.go:84] Creating CNI manager for ""
	I0804 01:37:50.908893  118832 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0804 01:37:50.908999  118832 start.go:340] cluster config:
	{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:37:50.909174  118832 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:37:50.911602  118832 out.go:177] * Starting "ha-998889" primary control-plane node in "ha-998889" cluster
	I0804 01:37:50.912806  118832 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:37:50.912836  118832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 01:37:50.912845  118832 cache.go:56] Caching tarball of preloaded images
	I0804 01:37:50.912947  118832 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:37:50.912958  118832 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:37:50.913072  118832 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:37:50.913254  118832 start.go:360] acquireMachinesLock for ha-998889: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:37:50.913294  118832 start.go:364] duration metric: took 22.304µs to acquireMachinesLock for "ha-998889"
	I0804 01:37:50.913308  118832 start.go:96] Skipping create...Using existing machine configuration
	I0804 01:37:50.913316  118832 fix.go:54] fixHost starting: 
	I0804 01:37:50.913603  118832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:37:50.913648  118832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:37:50.928415  118832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0804 01:37:50.928816  118832 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:37:50.929287  118832 main.go:141] libmachine: Using API Version  1
	I0804 01:37:50.929313  118832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:37:50.929657  118832 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:37:50.929882  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.930069  118832 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:37:50.931578  118832 fix.go:112] recreateIfNeeded on ha-998889: state=Running err=<nil>
	W0804 01:37:50.931612  118832 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 01:37:50.933822  118832 out.go:177] * Updating the running kvm2 "ha-998889" VM ...
	I0804 01:37:50.935031  118832 machine.go:94] provisionDockerMachine start ...
	I0804 01:37:50.935048  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.935285  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:50.937566  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:50.938059  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:50.938085  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:50.938228  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:50.938413  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:50.938575  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:50.938709  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:50.938861  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:50.939095  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:50.939107  118832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 01:37:51.050432  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:37:51.050473  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.050739  118832 buildroot.go:166] provisioning hostname "ha-998889"
	I0804 01:37:51.050766  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.050981  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.053799  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.054252  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.054279  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.054429  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.054594  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.054748  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.054924  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.055062  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:51.055246  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:51.055259  118832 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889 && echo "ha-998889" | sudo tee /etc/hostname
	I0804 01:37:51.183699  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:37:51.183724  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.186905  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.187333  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.187362  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.187566  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.187783  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.187975  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.188112  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.188295  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:51.188471  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:51.188486  118832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:37:51.298379  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:37:51.298433  118832 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:37:51.298465  118832 buildroot.go:174] setting up certificates
	I0804 01:37:51.298479  118832 provision.go:84] configureAuth start
	I0804 01:37:51.298495  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.298857  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:37:51.301447  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.301923  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.301953  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.302076  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.304734  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.305120  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.305154  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.305282  118832 provision.go:143] copyHostCerts
	I0804 01:37:51.305311  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:37:51.305347  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:37:51.305420  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:37:51.305508  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:37:51.305607  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:37:51.305628  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:37:51.305633  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:37:51.305657  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:37:51.305717  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:37:51.305733  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:37:51.305737  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:37:51.305758  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:37:51.305816  118832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889 san=[127.0.0.1 192.168.39.12 ha-998889 localhost minikube]
	I0804 01:37:51.848379  118832 provision.go:177] copyRemoteCerts
	I0804 01:37:51.848444  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:37:51.848474  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.850980  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.851287  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.851323  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.851431  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.851639  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.851806  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.851933  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:37:51.937451  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:37:51.937553  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0804 01:37:51.963185  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:37:51.963263  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:37:51.988852  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:37:51.988923  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:37:52.016309  118832 provision.go:87] duration metric: took 717.812724ms to configureAuth
	I0804 01:37:52.016340  118832 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:37:52.016619  118832 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:37:52.016711  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:52.019061  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:52.019438  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:52.019464  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:52.019593  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:52.019781  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:52.019937  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:52.020101  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:52.020329  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:52.020542  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:52.020559  118832 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:39:22.919739  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:39:22.919777  118832 machine.go:97] duration metric: took 1m31.984732599s to provisionDockerMachine
	I0804 01:39:22.919797  118832 start.go:293] postStartSetup for "ha-998889" (driver="kvm2")
	I0804 01:39:22.919815  118832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:39:22.919842  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:22.920221  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:39:22.920252  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:22.923569  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:22.924009  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:22.924043  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:22.924213  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:22.924408  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:22.924578  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:22.924755  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.014066  118832 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:39:23.018775  118832 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:39:23.018808  118832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:39:23.018873  118832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:39:23.019007  118832 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:39:23.019025  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:39:23.019132  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:39:23.029382  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:39:23.054834  118832 start.go:296] duration metric: took 135.020403ms for postStartSetup
	I0804 01:39:23.054878  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.055212  118832 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0804 01:39:23.055246  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.057917  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.058335  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.058362  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.058530  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.058696  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.058825  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.058931  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	W0804 01:39:23.144531  118832 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0804 01:39:23.144559  118832 fix.go:56] duration metric: took 1m32.231243673s for fixHost
	I0804 01:39:23.144582  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.147279  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.147638  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.147667  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.147799  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.148019  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.148180  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.148367  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.148559  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:39:23.148747  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:39:23.148760  118832 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:39:23.258458  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722735563.225823179
	
	I0804 01:39:23.258481  118832 fix.go:216] guest clock: 1722735563.225823179
	I0804 01:39:23.258488  118832 fix.go:229] Guest: 2024-08-04 01:39:23.225823179 +0000 UTC Remote: 2024-08-04 01:39:23.144567352 +0000 UTC m=+92.360079634 (delta=81.255827ms)
	I0804 01:39:23.258530  118832 fix.go:200] guest clock delta is within tolerance: 81.255827ms
	I0804 01:39:23.258538  118832 start.go:83] releasing machines lock for "ha-998889", held for 1m32.345235583s
	I0804 01:39:23.258558  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.258817  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:39:23.261393  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.261856  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.261901  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.262061  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262611  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262797  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262900  118832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:39:23.262946  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.263071  118832 ssh_runner.go:195] Run: cat /version.json
	I0804 01:39:23.263100  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.265696  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.265834  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266099  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.266138  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266266  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.266287  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.266312  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266441  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.266445  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.266622  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.266703  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.266780  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.266836  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.266969  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.365134  118832 ssh_runner.go:195] Run: systemctl --version
	I0804 01:39:23.371698  118832 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:39:23.533286  118832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:39:23.542177  118832 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:39:23.542251  118832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:39:23.552285  118832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 01:39:23.552323  118832 start.go:495] detecting cgroup driver to use...
	I0804 01:39:23.552410  118832 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:39:23.568759  118832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:39:23.582751  118832 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:39:23.582810  118832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:39:23.596566  118832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:39:23.610638  118832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:39:23.762526  118832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:39:23.912937  118832 docker.go:233] disabling docker service ...
	I0804 01:39:23.913016  118832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:39:23.930695  118832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:39:23.944819  118832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:39:24.088982  118832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:39:24.233893  118832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:39:24.248959  118832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:39:24.268905  118832 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:39:24.268969  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.279582  118832 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:39:24.279655  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.290030  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.300992  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.311651  118832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:39:24.322480  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.332847  118832 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.345218  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.356338  118832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:39:24.366374  118832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
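The sed invocations above pin the CRI-O pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to "cgroupfs" in /etc/crio/crio.conf.d/02-crio.conf before the daemon is reloaded and restarted. A minimal Go sketch of the same two substitutions, shown only for illustration against an assumed, representative config fragment (this is not minikube code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed, representative fragment of /etc/crio/crio.conf.d/02-crio.conf;
	// the real file on the VM may differ.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}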
	I0804 01:39:24.376651  118832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:39:24.520490  118832 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:39:27.508474  118832 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.987941441s)
	I0804 01:39:27.508507  118832 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:39:27.508571  118832 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:39:27.514468  118832 start.go:563] Will wait 60s for crictl version
	I0804 01:39:27.514529  118832 ssh_runner.go:195] Run: which crictl
	I0804 01:39:27.518222  118832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:39:27.561914  118832 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:39:27.561993  118832 ssh_runner.go:195] Run: crio --version
	I0804 01:39:27.592019  118832 ssh_runner.go:195] Run: crio --version
	I0804 01:39:27.623034  118832 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:39:27.624470  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:39:27.626952  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:27.627301  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:27.627322  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:27.627554  118832 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:39:27.632760  118832 kubeadm.go:883] updating cluster {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 01:39:27.632900  118832 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:39:27.632942  118832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:39:27.678362  118832 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:39:27.678386  118832 crio.go:433] Images already preloaded, skipping extraction
	I0804 01:39:27.678435  118832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:39:27.716304  118832 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:39:27.716329  118832 cache_images.go:84] Images are preloaded, skipping loading
	I0804 01:39:27.716342  118832 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.30.3 crio true true} ...
	I0804 01:39:27.716469  118832 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:39:27.716555  118832 ssh_runner.go:195] Run: crio config
	I0804 01:39:27.765435  118832 cni.go:84] Creating CNI manager for ""
	I0804 01:39:27.765464  118832 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0804 01:39:27.765477  118832 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 01:39:27.765507  118832 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-998889 NodeName:ha-998889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 01:39:27.765695  118832 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-998889"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
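The KubeletConfiguration document in the generated kubeadm config above is what points the kubelet at the CRI-O socket and the cgroupfs driver. Purely as an illustration (not minikube code), those fields can be read back with gopkg.in/yaml.v3; the document below is an abridged copy of the one shown in the log:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Abridged copy of the KubeletConfiguration emitted in the kubeadm config above.
const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		log.Fatalf("unmarshal kubelet config: %v", err)
	}
	fmt.Printf("%s: cgroupDriver=%s endpoint=%s staticPodPath=%s\n",
		cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.StaticPodPath)
}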
	
	I0804 01:39:27.765725  118832 kube-vip.go:115] generating kube-vip config ...
	I0804 01:39:27.765779  118832 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:39:27.777478  118832 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:39:27.777604  118832 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
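The static pod manifest above runs kube-vip with ARP advertisement of the control-plane VIP 192.168.39.254 and load-balancing of the API server on port 8443 (the address and port come from the env entries in the manifest). A throwaway Go probe, not part of the test suite, that simply checks whether that endpoint accepts TCP connections from inside the guest network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip env entries above.
	addr := net.JoinHostPort("192.168.39.254", "8443")
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("control-plane VIP not reachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Printf("control-plane VIP %s accepts TCP connections\n", addr)
}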
	I0804 01:39:27.777673  118832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:39:27.788015  118832 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 01:39:27.788090  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0804 01:39:27.798706  118832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0804 01:39:27.817002  118832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:39:27.834875  118832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0804 01:39:27.852791  118832 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:39:27.871384  118832 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:39:27.875698  118832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:39:28.026791  118832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:39:28.041409  118832 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.12
	I0804 01:39:28.041432  118832 certs.go:194] generating shared ca certs ...
	I0804 01:39:28.041448  118832 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.041657  118832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:39:28.041713  118832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:39:28.041727  118832 certs.go:256] generating profile certs ...
	I0804 01:39:28.041824  118832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:39:28.041859  118832 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09
	I0804 01:39:28.041884  118832 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.148 192.168.39.254]
	I0804 01:39:28.107335  118832 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 ...
	I0804 01:39:28.107371  118832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09: {Name:mk8487245ed0129d14fed5abbd35e04bb8f4a32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.107563  118832 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09 ...
	I0804 01:39:28.107583  118832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09: {Name:mk32e3f0283c85bf8bfebc6f456027cbc544d49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.107695  118832 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:39:28.107879  118832 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:39:28.108072  118832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:39:28.108091  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:39:28.108106  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:39:28.108121  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:39:28.108147  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:39:28.108164  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:39:28.108183  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:39:28.108208  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:39:28.108226  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:39:28.108288  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:39:28.108326  118832 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:39:28.108338  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:39:28.108379  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:39:28.108409  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:39:28.108444  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:39:28.108500  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:39:28.108536  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.108557  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.108574  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.109206  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:39:28.135802  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:39:28.160655  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:39:28.186312  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:39:28.210717  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 01:39:28.236019  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 01:39:28.290731  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:39:28.316103  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:39:28.341244  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:39:28.367717  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:39:28.392260  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:39:28.416664  118832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 01:39:28.434059  118832 ssh_runner.go:195] Run: openssl version
	I0804 01:39:28.439998  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:39:28.450790  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.455535  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.455583  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.461261  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:39:28.470432  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:39:28.480819  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.485222  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.485268  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.490947  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:39:28.500081  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:39:28.510831  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.515405  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.515453  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.521202  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:39:28.530690  118832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:39:28.535560  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 01:39:28.541294  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 01:39:28.547235  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 01:39:28.552846  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 01:39:28.558925  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 01:39:28.564644  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
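Each of the openssl x509 -checkend 86400 runs above exits non-zero if the named certificate will have expired 24 hours from now, which is how minikube decides whether control-plane certs need regeneration. For illustration only (not minikube code), the same check in Go with crypto/x509; the certificate path used here is a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath will have
// expired d from now, i.e. the condition a failing "-checkend" signals.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; any of the certs checked in the log would do.
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}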
	I0804 01:39:28.570087  118832 kubeadm.go:392] StartCluster: {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:39:28.570254  118832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 01:39:28.570323  118832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 01:39:28.611727  118832 cri.go:89] found id: "ec86579bf6c158df3821fb9dbec8faef8aa3d568dab1a5d1f7159056eb280795"
	I0804 01:39:28.611755  118832 cri.go:89] found id: "88e6ceb8a3a8cb99a438d980237741ca6d76b66be178c3e6ab3b64740e7b4725"
	I0804 01:39:28.611760  118832 cri.go:89] found id: "9689d7b18576bd7a530601f23fd61732e372c717c0773fbf8e9545eeea3f25ad"
	I0804 01:39:28.611763  118832 cri.go:89] found id: "7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947"
	I0804 01:39:28.611766  118832 cri.go:89] found id: "fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9"
	I0804 01:39:28.611769  118832 cri.go:89] found id: "426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184"
	I0804 01:39:28.611771  118832 cri.go:89] found id: "e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957"
	I0804 01:39:28.611774  118832 cri.go:89] found id: "e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372"
	I0804 01:39:28.611776  118832 cri.go:89] found id: "95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a"
	I0804 01:39:28.611781  118832 cri.go:89] found id: "cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6"
	I0804 01:39:28.611783  118832 cri.go:89] found id: "3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df"
	I0804 01:39:28.611799  118832 cri.go:89] found id: "0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977"
	I0804 01:39:28.611801  118832 cri.go:89] found id: "8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318"
	I0804 01:39:28.611803  118832 cri.go:89] found id: ""
	I0804 01:39:28.611848  118832 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.360275572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735713360249932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bbd5699-7cd8-4f0c-aab5-b110d04c4a9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.360969586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=413072ee-dec8-4c38-b24e-1f116ad540ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.361030010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=413072ee-dec8-4c38-b24e-1f116ad540ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.361478722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=413072ee-dec8-4c38-b24e-1f116ad540ba name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.407654422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96d80daa-a095-42eb-8186-4ce39f4e1c70 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.407740915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96d80daa-a095-42eb-8186-4ce39f4e1c70 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.409390178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02f35c50-10b5-45fb-985e-554bc9779c22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.410218130Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735713410192346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02f35c50-10b5-45fb-985e-554bc9779c22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.410927522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da426957-9776-4f9f-811c-d976e7b00c47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.411003508Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da426957-9776-4f9f-811c-d976e7b00c47 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.411434175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da426957-9776-4f9f-811c-d976e7b00c47 name=/runtime.v1.RuntimeService/ListContainers
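	The Version, ImageFsInfo, and ListContainers entries above are the standard runtime.v1 CRI polling calls made against the cri-o socket. A minimal sketch (not part of the captured log), assuming the k8s.io/cri-api Go client and the default cri-o socket path /var/run/crio/crio.sock, of issuing the same three calls manually:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		pb "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the cri-o CRI socket (assumed default path on the node).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		rt := pb.NewRuntimeServiceClient(conn)
		img := pb.NewImageServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// runtime.v1.RuntimeService/Version
		ver, err := rt.Version(ctx, &pb.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// runtime.v1.ImageService/ImageFsInfo
		fs, err := img.ImageFsInfo(ctx, &pb.ImageFsInfoRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
		}

		// runtime.v1.RuntimeService/ListContainers with an empty filter,
		// which is what produces the "No filters were applied" debug line above.
		cs, err := rt.ListContainers(ctx, &pb.ListContainersRequest{Filter: &pb.ContainerFilter{}})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cs.Containers {
			fmt.Println(c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
		}
	}

	The same unfiltered container list can also be obtained on the node with crictl ps -a; the journal entries that follow are further iterations of this polling loop.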
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.471236289Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6b986d3-97a9-4099-bad5-7c7e39c95bf5 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.471314855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6b986d3-97a9-4099-bad5-7c7e39c95bf5 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.472243135Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93b7f314-4f54-43bb-8dca-9d50174d4f59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.472954759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735713472916764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93b7f314-4f54-43bb-8dca-9d50174d4f59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.473816521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f12b7143-fc36-4580-a500-fa311f85362f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.473914743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f12b7143-fc36-4580-a500-fa311f85362f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.474342915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f12b7143-fc36-4580-a500-fa311f85362f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.523446952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad62f802-d4c5-46e2-a5a9-f5d9c0af1591 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.523541349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad62f802-d4c5-46e2-a5a9-f5d9c0af1591 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.525130407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e378fc6f-6817-49e9-96e8-cdc16f08f4dd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.525741018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735713525707627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e378fc6f-6817-49e9-96e8-cdc16f08f4dd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.526927588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcfc0e53-04bb-4d14-8da5-c3e6129633db name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.527127683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcfc0e53-04bb-4d14-8da5-c3e6129633db name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:41:53 ha-998889 crio[3722]: time="2024-08-04 01:41:53.528342219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcfc0e53-04bb-4d14-8da5-c3e6129633db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2769cff2a2b2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   b8b3aa4054b5b       storage-provisioner
	abbde27610f2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   923dc81c82e1e       kube-controller-manager-ha-998889
	ae15bd3bdf8b5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   ca146213b85b7       kube-apiserver-ha-998889
	183f6a6f77331       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   5d68c8d7e7c12       busybox-fc5497c4f-v468b
	011410390b0d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   b8b3aa4054b5b       storage-provisioner
	b434b44f1bf11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   3f4471219e95e       kube-vip-ha-998889
	316140ed791e6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   3aadcc23e0102       kindnet-gc22h
	2cb87ebf1b446       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   44a59ecb45819       coredns-7db6d8ff4d-b8ds7
	11f3f2daf285f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   88980a4edc1a4       kube-proxy-56twz
	e84d839561d00       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   71b522b839822       coredns-7db6d8ff4d-ddb5m
	2d5d507e714a2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   97fe4c22a4265       kube-scheduler-ha-998889
	1dd3b269ecfda       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   923dc81c82e1e       kube-controller-manager-ha-998889
	9f5483ec0343a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   74c4aa5b4cf9e       etcd-ha-998889
	1460950dd5a80       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   ca146213b85b7       kube-apiserver-ha-998889
	1bb7230a66693       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   5b4550fd8d43d       busybox-fc5497c4f-v468b
	7ce1fc9d2ceb3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   3037e05c8f0db       coredns-7db6d8ff4d-b8ds7
	fe75909603216       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   a3cc1795993d6       coredns-7db6d8ff4d-ddb5m
	e987e973e97a5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   120c9a2eb52aa       kindnet-gc22h
	e32fb23a61d2d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   9689d6db72b02       kube-proxy-56twz
	cbd934bafbbf1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   580e42f37b240       etcd-ha-998889
	3f264e5c2143d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   c25b0800264cf       kube-scheduler-ha-998889
	
	
	==> coredns [2cb87ebf1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b] <==
	[INFO] plugin/kubernetes: Trace[897242627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:43.902) (total time: 10001ms):
	Trace[897242627]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:39:53.903)
	Trace[897242627]: [10.00125379s] [10.00125379s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[555702147]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:46.341) (total time: 10723ms):
	Trace[555702147]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer 10722ms (01:39:57.064)
	Trace[555702147]: [10.723707768s] [10.723707768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36692->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36692->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947] <==
	[INFO] 10.244.1.2:54493 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154283s
	[INFO] 10.244.1.2:45366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188537s
	[INFO] 10.244.1.2:42179 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223485s
	[INFO] 10.244.2.2:48925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257001s
	[INFO] 10.244.2.2:46133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001441239s
	[INFO] 10.244.2.2:40620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108193s
	[INFO] 10.244.2.2:45555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071897s
	[INFO] 10.244.0.4:57133 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007622s
	[INFO] 10.244.0.4:45128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012024s
	[INFO] 10.244.0.4:33660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084733s
	[INFO] 10.244.1.2:48368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133283s
	[INFO] 10.244.1.2:42909 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130327s
	[INFO] 10.244.1.2:54181 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067193s
	[INFO] 10.244.2.2:36881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125847s
	[INFO] 10.244.2.2:52948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090317s
	[INFO] 10.244.1.2:34080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132803s
	[INFO] 10.244.1.2:38625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147078s
	[INFO] 10.244.2.2:41049 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205078s
	[INFO] 10.244.2.2:47520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094037s
	[INFO] 10.244.2.2:48004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211339s
	[INFO] 10.244.0.4:52706 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087998s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1942&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1948&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1648200618]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:39.797) (total time: 10001ms):
	Trace[1648200618]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:39:49.799)
	Trace[1648200618]: [10.001816969s] [10.001816969s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34500->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34500->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50406->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50406->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9] <==
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001982538s
	[INFO] 10.244.2.2:59450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165578s
	[INFO] 10.244.2.2:44599 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132406s
	[INFO] 10.244.2.2:38280 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086968s
	[INFO] 10.244.0.4:52340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111664s
	[INFO] 10.244.0.4:55794 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001989197s
	[INFO] 10.244.0.4:56345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.0.4:50778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090371s
	[INFO] 10.244.0.4:47116 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132729s
	[INFO] 10.244.1.2:54780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104255s
	[INFO] 10.244.2.2:52086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092312s
	[INFO] 10.244.2.2:36096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008133s
	[INFO] 10.244.0.4:35645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084037s
	[INFO] 10.244.0.4:57031 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00004652s
	[INFO] 10.244.0.4:53264 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005834s
	[INFO] 10.244.0.4:52476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111362s
	[INFO] 10.244.1.2:39754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161853s
	[INFO] 10.244.1.2:44320 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018965s
	[INFO] 10.244.2.2:58250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133355s
	[INFO] 10.244.0.4:34248 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137551s
	[INFO] 10.244.0.4:46858 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082831s
	[INFO] 10.244.0.4:52801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017483s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1942&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-998889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:28:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:41:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-998889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9bfc18a8dd4a25ae5d0b652cb98f91
	  System UUID:                fa9bfc18-a8dd-4a25-ae5d-0b652cb98f91
	  Boot ID:                    ddede9e4-4547-41a5-820a-f6568caf06a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v468b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-b8ds7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-ddb5m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-998889                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-gc22h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-998889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-998889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-56twz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-998889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-998889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 97s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-998889 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Warning  ContainerGCFailed        2m37s (x2 over 3m37s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           88s                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           86s                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           28s                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	
	
	Name:               ha-998889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:29:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:41:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:40:56 +0000   Sun, 04 Aug 2024 01:40:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:40:56 +0000   Sun, 04 Aug 2024 01:40:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:40:56 +0000   Sun, 04 Aug 2024 01:40:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:40:56 +0000   Sun, 04 Aug 2024 01:40:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-998889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8754ed7ba6c04d5d808bf540e4c5a093
	  System UUID:                8754ed7b-a6c0-4d5d-808b-f540e4c5a093
	  Boot ID:                    f010620e-c28e-4dfd-9fd8-683c4880bba4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jqps                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-998889-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-mm9t2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-998889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-998889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-v4j77                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-998889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-998889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  NodeNotReady             8m55s                node-controller  Node ha-998889-m02 status is now: NodeNotReady
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m1s (x7 over 2m1s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m1s)    kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m1s)    kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           89s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           29s                  node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	
	
	Name:               ha-998889-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_30_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:30:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:41:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:41:26 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:41:26 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:41:26 +0000   Sun, 04 Aug 2024 01:30:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:41:26 +0000   Sun, 04 Aug 2024 01:30:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-998889-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49ee34ab17a14b2ba68118c94f92f005
	  System UUID:                49ee34ab-17a1-4b2b-a681-18c94f92f005
	  Boot ID:                    b37cba94-a909-4bd9-9f66-eee00a712fc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8wnwt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-998889-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-rsp5h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-998889-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-998889-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wj5z9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-998889-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-998889-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-998889-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-998889-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node ha-998889-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node ha-998889-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node ha-998889-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-998889-m03 has been rebooted, boot id: b37cba94-a909-4bd9-9f66-eee00a712fc6
	  Normal   RegisteredNode           29s                node-controller  Node ha-998889-m03 event: Registered Node ha-998889-m03 in Controller
	
	
	Name:               ha-998889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:31:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:41:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-998889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e86557b9788446aca3bd64c7bcc82957
	  System UUID:                e86557b9-7884-46ac-a3bd-64c7bcc82957
	  Boot ID:                    cd38eada-249f-443d-b928-f87347c45a30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5cv7z       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-9qdn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   NodeReady                9m50s              kubelet          Node ha-998889-m04 status is now: NodeReady
	  Normal   RegisteredNode           89s                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   NodeNotReady             49s                node-controller  Node ha-998889-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           29s                node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-998889-m04 has been rebooted, boot id: cd38eada-249f-443d-b928-f87347c45a30
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-998889-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.869407] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.063774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058921] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.163748] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.144819] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.274744] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Aug 4 01:28] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.067193] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.231084] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.024644] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.031121] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.102027] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.498623] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.120089] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 4 01:29] kauditd_printk_skb: 26 callbacks suppressed
	[Aug 4 01:39] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +0.155417] systemd-fstab-generator[3652]: Ignoring "noauto" option for root device
	[  +0.177356] systemd-fstab-generator[3666]: Ignoring "noauto" option for root device
	[  +0.143690] systemd-fstab-generator[3678]: Ignoring "noauto" option for root device
	[  +0.288412] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +3.506403] systemd-fstab-generator[3809]: Ignoring "noauto" option for root device
	[  +5.906362] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.013351] kauditd_printk_skb: 86 callbacks suppressed
	[Aug 4 01:40] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.628974] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d] <==
	{"level":"warn","ts":"2024-08-04T01:40:54.930609Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:40:54.930662Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:40:55.27926Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:40:55.279347Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:40:58.93241Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:40:58.932572Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:00.279916Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:00.280014Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:02.934662Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:02.934806Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:05.280271Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:05.280316Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:06.936933Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:06.937069Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"7f4b3c159583e07e","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-04T01:41:07.939584Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:07.952705Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:07.95394Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"7f4b3c159583e07e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-04T01:41:07.954088Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:07.954008Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:07.962593Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"7f4b3c159583e07e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-04T01:41:07.962669Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:10.280887Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:10.281156Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:11.347525Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.149423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-998889-m03\" ","response":"range_response_count:1 size:6894"}
	{"level":"info","ts":"2024-08-04T01:41:11.347718Z","caller":"traceutil/trace.go:171","msg":"trace[432126751] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-998889-m03; range_end:; response_count:1; response_revision:2415; }","duration":"107.393178ms","start":"2024-08-04T01:41:11.240281Z","end":"2024-08-04T01:41:11.347674Z","steps":["trace[432126751] 'agreement among raft nodes before linearized reading'  (duration: 66.185051ms)","trace[432126751] 'range keys from in-memory index tree'  (duration: 40.904373ms)"],"step_count":2}
	
	
	==> etcd [cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6] <==
	2024/08/04 01:37:52 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-04T01:37:52.165281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.916641232s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-04T01:37:52.188582Z","caller":"traceutil/trace.go:171","msg":"trace[1810148600] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"8.939940048s","start":"2024-08-04T01:37:43.248636Z","end":"2024-08-04T01:37:52.188576Z","steps":["trace[1810148600] 'agreement among raft nodes before linearized reading'  (duration: 8.916641418s)"],"step_count":1}
	2024/08/04 01:37:52 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-04T01:37:52.201182Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1349832058482900657,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-04T01:37:52.313333Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T01:37:52.313436Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T01:37:52.313513Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ab0e927fe14112bb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-04T01:37:52.313732Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313773Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313815Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313984Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.31411Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314216Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314249Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314259Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314272Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31431Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31441Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31449Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314595Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.318136Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-04T01:37:52.318247Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-04T01:37:52.318285Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-998889","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	
	
	==> kernel <==
	 01:41:54 up 14 min,  0 users,  load average: 0.54, 0.50, 0.32
	Linux ha-998889 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12] <==
	I0804 01:41:15.821554       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:41:25.821120       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:41:25.821254       1 main.go:299] handling current node
	I0804 01:41:25.821299       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:41:25.821327       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:41:25.821503       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:41:25.821540       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:41:25.821658       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:41:25.821694       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:41:35.820677       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:41:35.820727       1 main.go:299] handling current node
	I0804 01:41:35.820741       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:41:35.820747       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:41:35.821011       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:41:35.821039       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:41:35.821093       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:41:35.821115       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:41:45.820953       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:41:45.821019       1 main.go:299] handling current node
	I0804 01:41:45.821039       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:41:45.821046       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:41:45.821214       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:41:45.821246       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:41:45.821312       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:41:45.821337       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957] <==
	I0804 01:37:26.892070       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:26.892089       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:26.892293       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:26.892320       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:26.892381       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:26.892400       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:37:36.892166       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:37:36.892193       1 main.go:299] handling current node
	I0804 01:37:36.892206       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:36.892210       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:36.892398       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:36.892405       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:36.892480       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:36.892485       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:37:46.892258       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:37:46.892313       1 main.go:299] handling current node
	I0804 01:37:46.892328       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:46.892334       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:46.892470       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:46.892493       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:46.892552       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:46.892557       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	E0804 01:37:47.235469       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1911&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0804 01:37:50.307465       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1911": dial tcp 10.96.0.1:443: connect: no route to host
	E0804 01:37:50.307552       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1911": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864] <==
	I0804 01:39:35.140393       1 options.go:221] external host was not specified, using 192.168.39.12
	I0804 01:39:35.142993       1 server.go:148] Version: v1.30.3
	I0804 01:39:35.143039       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:39:36.042285       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0804 01:39:36.051940       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 01:39:36.057009       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0804 01:39:36.057042       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 01:39:36.057301       1 instance.go:299] Using reconciler: lease
	W0804 01:39:56.040102       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0804 01:39:56.040102       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0804 01:39:56.058385       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3] <==
	I0804 01:40:14.802342       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0804 01:40:14.802803       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0804 01:40:14.802918       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0804 01:40:14.884713       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 01:40:14.885805       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 01:40:14.886624       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 01:40:14.887110       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 01:40:14.889653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 01:40:14.900125       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0804 01:40:14.903111       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200]
	I0804 01:40:14.903214       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 01:40:14.903397       1 aggregator.go:165] initial CRD sync complete...
	I0804 01:40:14.903431       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 01:40:14.903454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 01:40:14.903475       1 cache.go:39] Caches are synced for autoregister controller
	I0804 01:40:14.923257       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 01:40:14.929797       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 01:40:14.929878       1 policy_source.go:224] refreshing policies
	I0804 01:40:14.985135       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 01:40:14.987904       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 01:40:15.004324       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 01:40:15.043047       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0804 01:40:15.050743       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0804 01:40:15.799097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 01:40:16.179134       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.200]
	
	
	==> kube-controller-manager [1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e] <==
	I0804 01:39:35.694412       1 serving.go:380] Generated self-signed cert in-memory
	I0804 01:39:36.195307       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0804 01:39:36.195466       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:39:36.197138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0804 01:39:36.197264       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 01:39:36.197291       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 01:39:36.197317       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0804 01:39:57.065244       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d] <==
	I0804 01:40:27.833609       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0804 01:40:27.901751       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 01:40:27.902048       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.845µs"
	I0804 01:40:27.902050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="211.13µs"
	I0804 01:40:27.905585       1 shared_informer.go:320] Caches are synced for deployment
	I0804 01:40:27.906143       1 shared_informer.go:320] Caches are synced for disruption
	I0804 01:40:27.993424       1 shared_informer.go:320] Caches are synced for HPA
	I0804 01:40:28.017611       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 01:40:28.061056       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 01:40:28.449150       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 01:40:28.504639       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 01:40:28.504675       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 01:40:37.603207       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wspkf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wspkf\": the object has been modified; please apply your changes to the latest version and try again"
	I0804 01:40:37.603588       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f305211a-f7cb-4e9b-aeb4-24f589f6832b", APIVersion:"v1", ResourceVersion:"290", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wspkf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wspkf": the object has been modified; please apply your changes to the latest version and try again
	I0804 01:40:37.623072       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wspkf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wspkf\": the object has been modified; please apply your changes to the latest version and try again"
	I0804 01:40:37.623202       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f305211a-f7cb-4e9b-aeb4-24f589f6832b", APIVersion:"v1", ResourceVersion:"290", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wspkf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wspkf": the object has been modified; please apply your changes to the latest version and try again
	I0804 01:40:37.661079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="105.050625ms"
	I0804 01:40:37.661199       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.659µs"
	I0804 01:40:37.679813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.46896ms"
	I0804 01:40:37.680751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="210.159µs"
	I0804 01:40:56.650712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.190612ms"
	I0804 01:40:56.651467       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.836µs"
	I0804 01:41:16.004495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.178122ms"
	I0804 01:41:16.005543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.233µs"
	I0804 01:41:45.583074       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	
	
	==> kube-proxy [11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5] <==
	I0804 01:39:36.073777       1 server_linux.go:69] "Using iptables proxy"
	E0804 01:39:36.547438       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:39.619434       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:42.692275       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:48.836604       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:58.051918       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:40:16.486372       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0804 01:40:16.486481       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0804 01:40:16.626901       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:40:16.627024       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:40:16.627046       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:40:16.634050       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:40:16.635289       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:40:16.635402       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:40:16.649144       1 config.go:192] "Starting service config controller"
	I0804 01:40:16.649216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:40:16.649328       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:40:16.649338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:40:16.650505       1 config.go:319] "Starting node config controller"
	I0804 01:40:16.650514       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:40:16.749754       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:40:16.749816       1 shared_informer.go:320] Caches are synced for service config
	I0804 01:40:16.751375       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372] <==
	E0804 01:36:23.925011       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.563553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.563810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.564171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.564276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.564490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.564593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:41.443627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:41.443935       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:44.515293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:44.515361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:44.515476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:44.515535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:59.875533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:59.875806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:02.947571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:02.948594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:06.019341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:06.019570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:36.739749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:36.739948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:39.812255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:39.812537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:49.028623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:49.028718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8] <==
	W0804 01:40:06.174603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.174676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.12:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:06.411831       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.411989       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.12:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:06.585345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.585456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:06.974066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.974109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:07.932683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:07.932743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:11.719404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:11.719496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:14.815816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 01:40:14.817352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 01:40:14.817556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 01:40:14.817697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:40:14.817905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:40:14.817947       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 01:40:14.818044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:40:14.818072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:40:14.818108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 01:40:14.818133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 01:40:14.818243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 01:40:14.818273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0804 01:40:16.276319       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df] <==
	E0804 01:37:48.066956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:37:48.334521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 01:37:48.334582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 01:37:48.485519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 01:37:48.485569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 01:37:50.493761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 01:37:50.493814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 01:37:50.903135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 01:37:50.903243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:37:51.094003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0804 01:37:51.094104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0804 01:37:51.679039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:37:51.679140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:37:52.079787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 01:37:52.079819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 01:37:52.084995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 01:37:52.085021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 01:37:52.096155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 01:37:52.096204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:37:52.118907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:37:52.118971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0804 01:37:52.149451       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0804 01:37:52.150228       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0804 01:37:52.154811       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0804 01:37:52.156269       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 01:40:07 ha-998889 kubelet[1372]: E0804 01:40:07.268545    1372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-998889?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Aug 04 01:40:10 ha-998889 kubelet[1372]: I0804 01:40:10.339307    1372 status_manager.go:853] "Failed to get status for pod" podUID="afa070e1274a0587ba8559359cd730bd" pod="kube-system/kube-apiserver-ha-998889" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 04 01:40:10 ha-998889 kubelet[1372]: E0804 01:40:10.339229    1372 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-998889.17e862adc9447050\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-998889.17e862adc9447050  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-998889,UID:afa070e1274a0587ba8559359cd730bd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-998889,},FirstTimestamp:2024-08-04 01:35:56.014784592 +0000 UTC m=+459.775729910,LastTimestamp:2024-08-04 01:36:00.025885555 +0000 UTC m=+463.786830874,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-998889,}"
	Aug 04 01:40:12 ha-998889 kubelet[1372]: I0804 01:40:12.385741    1372 scope.go:117] "RemoveContainer" containerID="1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864"
	Aug 04 01:40:13 ha-998889 kubelet[1372]: I0804 01:40:13.411179    1372 status_manager.go:853] "Failed to get status for pod" podUID="b717f0cd85eef929ccb4647ca0b1eb7b" pod="kube-system/kube-controller-manager-ha-998889" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Aug 04 01:40:16 ha-998889 kubelet[1372]: I0804 01:40:16.397529    1372 scope.go:117] "RemoveContainer" containerID="1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e"
	Aug 04 01:40:16 ha-998889 kubelet[1372]: E0804 01:40:16.467016    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:40:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:40:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:40:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:40:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:40:16 ha-998889 kubelet[1372]: I0804 01:40:16.478560    1372 scope.go:117] "RemoveContainer" containerID="88e6ceb8a3a8cb99a438d980237741ca6d76b66be178c3e6ab3b64740e7b4725"
	Aug 04 01:40:22 ha-998889 kubelet[1372]: I0804 01:40:22.385081    1372 scope.go:117] "RemoveContainer" containerID="011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c"
	Aug 04 01:40:22 ha-998889 kubelet[1372]: E0804 01:40:22.385314    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b2eb4a37-052e-4e8e-9b0d-d58847698eeb)\"" pod="kube-system/storage-provisioner" podUID="b2eb4a37-052e-4e8e-9b0d-d58847698eeb"
	Aug 04 01:40:37 ha-998889 kubelet[1372]: I0804 01:40:37.384954    1372 scope.go:117] "RemoveContainer" containerID="011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c"
	Aug 04 01:40:37 ha-998889 kubelet[1372]: E0804 01:40:37.385245    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b2eb4a37-052e-4e8e-9b0d-d58847698eeb)\"" pod="kube-system/storage-provisioner" podUID="b2eb4a37-052e-4e8e-9b0d-d58847698eeb"
	Aug 04 01:40:39 ha-998889 kubelet[1372]: I0804 01:40:39.280221    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-v468b" podStartSLOduration=570.46213119 podStartE2EDuration="9m33.280158566s" podCreationTimestamp="2024-08-04 01:31:06 +0000 UTC" firstStartedPulling="2024-08-04 01:31:07.31195721 +0000 UTC m=+171.072902529" lastFinishedPulling="2024-08-04 01:31:10.129984575 +0000 UTC m=+173.890929905" observedRunningTime="2024-08-04 01:31:11.212674462 +0000 UTC m=+174.973619801" watchObservedRunningTime="2024-08-04 01:40:39.280158566 +0000 UTC m=+743.041103902"
	Aug 04 01:40:51 ha-998889 kubelet[1372]: I0804 01:40:51.386045    1372 scope.go:117] "RemoveContainer" containerID="011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c"
	Aug 04 01:41:05 ha-998889 kubelet[1372]: I0804 01:41:05.385975    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-998889" podUID="1baf4284-e439-4cfa-b46f-dc618a37580b"
	Aug 04 01:41:05 ha-998889 kubelet[1372]: I0804 01:41:05.440654    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-998889"
	Aug 04 01:41:16 ha-998889 kubelet[1372]: E0804 01:41:16.428832    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:41:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 01:41:53.041321  120150 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-90243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-998889 -n ha-998889
helpers_test.go:261: (dbg) Run:  kubectl --context ha-998889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 stop -v=7 --alsologtostderr: exit status 82 (2m0.478123652s)

                                                
                                                
-- stdout --
	* Stopping node "ha-998889-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:42:13.112246  120558 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:42:13.112344  120558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:42:13.112352  120558 out.go:304] Setting ErrFile to fd 2...
	I0804 01:42:13.112356  120558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:42:13.112529  120558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:42:13.112752  120558 out.go:298] Setting JSON to false
	I0804 01:42:13.112827  120558 mustload.go:65] Loading cluster: ha-998889
	I0804 01:42:13.113229  120558 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:42:13.113312  120558 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:42:13.113527  120558 mustload.go:65] Loading cluster: ha-998889
	I0804 01:42:13.113663  120558 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:42:13.113717  120558 stop.go:39] StopHost: ha-998889-m04
	I0804 01:42:13.114076  120558 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:42:13.114144  120558 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:42:13.129441  120558 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32919
	I0804 01:42:13.129982  120558 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:42:13.130560  120558 main.go:141] libmachine: Using API Version  1
	I0804 01:42:13.130578  120558 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:42:13.130977  120558 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:42:13.133580  120558 out.go:177] * Stopping node "ha-998889-m04"  ...
	I0804 01:42:13.134888  120558 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0804 01:42:13.134935  120558 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:42:13.135194  120558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0804 01:42:13.135220  120558 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:42:13.138268  120558 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:42:13.138713  120558 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:41:39 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:42:13.138752  120558 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:42:13.138864  120558 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:42:13.139039  120558 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:42:13.139193  120558 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:42:13.139342  120558 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	I0804 01:42:13.228128  120558 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0804 01:42:13.282087  120558 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0804 01:42:13.336132  120558 main.go:141] libmachine: Stopping "ha-998889-m04"...
	I0804 01:42:13.336163  120558 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:42:13.337941  120558 main.go:141] libmachine: (ha-998889-m04) Calling .Stop
	I0804 01:42:13.341835  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 0/120
	I0804 01:42:14.344004  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 1/120
	I0804 01:42:15.345325  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 2/120
	I0804 01:42:16.346887  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 3/120
	I0804 01:42:17.348506  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 4/120
	I0804 01:42:18.351152  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 5/120
	I0804 01:42:19.352705  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 6/120
	I0804 01:42:20.354387  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 7/120
	I0804 01:42:21.355863  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 8/120
	I0804 01:42:22.357415  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 9/120
	I0804 01:42:23.358741  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 10/120
	I0804 01:42:24.360060  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 11/120
	I0804 01:42:25.361631  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 12/120
	I0804 01:42:26.363933  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 13/120
	I0804 01:42:27.365519  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 14/120
	I0804 01:42:28.367591  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 15/120
	I0804 01:42:29.369786  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 16/120
	I0804 01:42:30.371365  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 17/120
	I0804 01:42:31.372575  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 18/120
	I0804 01:42:32.374035  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 19/120
	I0804 01:42:33.376506  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 20/120
	I0804 01:42:34.378042  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 21/120
	I0804 01:42:35.380148  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 22/120
	I0804 01:42:36.382288  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 23/120
	I0804 01:42:37.383763  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 24/120
	I0804 01:42:38.385589  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 25/120
	I0804 01:42:39.386750  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 26/120
	I0804 01:42:40.388142  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 27/120
	I0804 01:42:41.389424  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 28/120
	I0804 01:42:42.390762  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 29/120
	I0804 01:42:43.392796  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 30/120
	I0804 01:42:44.394010  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 31/120
	I0804 01:42:45.395931  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 32/120
	I0804 01:42:46.397318  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 33/120
	I0804 01:42:47.398703  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 34/120
	I0804 01:42:48.400648  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 35/120
	I0804 01:42:49.401897  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 36/120
	I0804 01:42:50.403254  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 37/120
	I0804 01:42:51.404677  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 38/120
	I0804 01:42:52.406471  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 39/120
	I0804 01:42:53.408907  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 40/120
	I0804 01:42:54.411161  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 41/120
	I0804 01:42:55.412294  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 42/120
	I0804 01:42:56.414282  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 43/120
	I0804 01:42:57.415616  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 44/120
	I0804 01:42:58.417699  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 45/120
	I0804 01:42:59.419705  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 46/120
	I0804 01:43:00.421086  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 47/120
	I0804 01:43:01.422373  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 48/120
	I0804 01:43:02.424357  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 49/120
	I0804 01:43:03.426455  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 50/120
	I0804 01:43:04.427705  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 51/120
	I0804 01:43:05.428967  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 52/120
	I0804 01:43:06.430268  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 53/120
	I0804 01:43:07.431624  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 54/120
	I0804 01:43:08.433695  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 55/120
	I0804 01:43:09.434843  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 56/120
	I0804 01:43:10.436528  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 57/120
	I0804 01:43:11.437912  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 58/120
	I0804 01:43:12.439940  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 59/120
	I0804 01:43:13.442292  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 60/120
	I0804 01:43:14.443652  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 61/120
	I0804 01:43:15.445217  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 62/120
	I0804 01:43:16.446528  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 63/120
	I0804 01:43:17.448856  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 64/120
	I0804 01:43:18.450805  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 65/120
	I0804 01:43:19.452847  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 66/120
	I0804 01:43:20.454411  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 67/120
	I0804 01:43:21.455954  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 68/120
	I0804 01:43:22.457108  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 69/120
	I0804 01:43:23.458681  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 70/120
	I0804 01:43:24.459999  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 71/120
	I0804 01:43:25.461401  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 72/120
	I0804 01:43:26.462815  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 73/120
	I0804 01:43:27.464167  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 74/120
	I0804 01:43:28.465698  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 75/120
	I0804 01:43:29.467077  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 76/120
	I0804 01:43:30.468886  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 77/120
	I0804 01:43:31.470536  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 78/120
	I0804 01:43:32.471964  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 79/120
	I0804 01:43:33.474384  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 80/120
	I0804 01:43:34.475864  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 81/120
	I0804 01:43:35.477327  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 82/120
	I0804 01:43:36.478796  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 83/120
	I0804 01:43:37.480211  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 84/120
	I0804 01:43:38.482234  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 85/120
	I0804 01:43:39.483996  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 86/120
	I0804 01:43:40.485314  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 87/120
	I0804 01:43:41.487200  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 88/120
	I0804 01:43:42.488490  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 89/120
	I0804 01:43:43.490608  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 90/120
	I0804 01:43:44.492045  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 91/120
	I0804 01:43:45.493603  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 92/120
	I0804 01:43:46.495989  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 93/120
	I0804 01:43:47.497440  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 94/120
	I0804 01:43:48.499561  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 95/120
	I0804 01:43:49.500843  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 96/120
	I0804 01:43:50.502285  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 97/120
	I0804 01:43:51.503662  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 98/120
	I0804 01:43:52.504994  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 99/120
	I0804 01:43:53.506988  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 100/120
	I0804 01:43:54.508522  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 101/120
	I0804 01:43:55.509879  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 102/120
	I0804 01:43:56.511300  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 103/120
	I0804 01:43:57.512648  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 104/120
	I0804 01:43:58.514803  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 105/120
	I0804 01:43:59.516210  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 106/120
	I0804 01:44:00.517525  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 107/120
	I0804 01:44:01.519267  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 108/120
	I0804 01:44:02.520638  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 109/120
	I0804 01:44:03.522813  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 110/120
	I0804 01:44:04.524188  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 111/120
	I0804 01:44:05.525516  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 112/120
	I0804 01:44:06.528047  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 113/120
	I0804 01:44:07.529397  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 114/120
	I0804 01:44:08.531589  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 115/120
	I0804 01:44:09.532930  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 116/120
	I0804 01:44:10.534591  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 117/120
	I0804 01:44:11.536030  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 118/120
	I0804 01:44:12.537390  120558 main.go:141] libmachine: (ha-998889-m04) Waiting for machine to stop 119/120
	I0804 01:44:13.537914  120558 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0804 01:44:13.537985  120558 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0804 01:44:13.540064  120558 out.go:177] 
	W0804 01:44:13.541497  120558 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0804 01:44:13.541519  120558 out.go:239] * 
	* 
	W0804 01:44:13.544838  120558 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 01:44:13.546128  120558 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-998889 stop -v=7 --alsologtostderr": exit status 82
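The repeated "Waiting for machine to stop N/120" lines above reflect a bounded poll: roughly one check per second for 120 attempts, after which the stop is abandoned and GUEST_STOP_TIMEOUT is raised while the VM still reports "Running". A minimal sketch of that pattern in Go (the vmState helper, package layout, and one-second interval are illustrative assumptions, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// pollStop checks the VM state once per second, up to maxAttempts times,
	// mirroring the "Waiting for machine to stop N/120" loop in the log above.
	// vmState is a hypothetical helper supplied by the caller.
	func pollStop(vmState func() string, maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			if vmState() != "Running" {
				return nil // machine left the Running state; treat as stopped
			}
			log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
			time.Sleep(1 * time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", vmState())
	}

	func main() {
		// Stub that never stops, reproducing the timeout path seen above.
		err := pollStop(func() string { return "Running" }, 3)
		fmt.Println("stop err:", err)
	}
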
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr: exit status 3 (18.999571594s)

                                                
                                                
-- stdout --
	ha-998889
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-998889-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:44:13.592591  120978 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:44:13.592720  120978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:44:13.592731  120978 out.go:304] Setting ErrFile to fd 2...
	I0804 01:44:13.592737  120978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:44:13.592924  120978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:44:13.593128  120978 out.go:298] Setting JSON to false
	I0804 01:44:13.593160  120978 mustload.go:65] Loading cluster: ha-998889
	I0804 01:44:13.593283  120978 notify.go:220] Checking for updates...
	I0804 01:44:13.593637  120978 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:44:13.593662  120978 status.go:255] checking status of ha-998889 ...
	I0804 01:44:13.594075  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.594158  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.610491  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0804 01:44:13.610999  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.611584  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.611614  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.612044  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.612247  120978 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:44:13.614128  120978 status.go:330] ha-998889 host status = "Running" (err=<nil>)
	I0804 01:44:13.614148  120978 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:44:13.614478  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.614514  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.629830  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0804 01:44:13.630202  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.630625  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.630649  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.630958  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.631153  120978 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:44:13.633912  120978 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:44:13.634391  120978 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:44:13.634418  120978 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:44:13.634523  120978 host.go:66] Checking if "ha-998889" exists ...
	I0804 01:44:13.634809  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.634857  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.649932  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0804 01:44:13.650396  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.650861  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.650888  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.651251  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.651465  120978 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:44:13.651734  120978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:44:13.651774  120978 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:44:13.655085  120978 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:44:13.655584  120978 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:44:13.655623  120978 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:44:13.655755  120978 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:44:13.655980  120978 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:44:13.656138  120978 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:44:13.656275  120978 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:44:13.744295  120978 ssh_runner.go:195] Run: systemctl --version
	I0804 01:44:13.752285  120978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:44:13.773152  120978 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:44:13.773185  120978 api_server.go:166] Checking apiserver status ...
	I0804 01:44:13.773220  120978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:44:13.791000  120978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4970/cgroup
	W0804 01:44:13.802142  120978 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4970/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:44:13.802209  120978 ssh_runner.go:195] Run: ls
	I0804 01:44:13.807048  120978 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:44:13.811411  120978 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:44:13.811435  120978 status.go:422] ha-998889 apiserver status = Running (err=<nil>)
	I0804 01:44:13.811444  120978 status.go:257] ha-998889 status: &{Name:ha-998889 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:44:13.811460  120978 status.go:255] checking status of ha-998889-m02 ...
	I0804 01:44:13.811851  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.811902  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.827267  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0804 01:44:13.827721  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.828180  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.828202  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.828563  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.828762  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetState
	I0804 01:44:13.830541  120978 status.go:330] ha-998889-m02 host status = "Running" (err=<nil>)
	I0804 01:44:13.830562  120978 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:44:13.830874  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.830920  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.846131  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0804 01:44:13.846674  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.847258  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.847291  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.847622  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.847843  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetIP
	I0804 01:44:13.850645  120978 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:44:13.851078  120978 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:39:39 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:44:13.851112  120978 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:44:13.851308  120978 host.go:66] Checking if "ha-998889-m02" exists ...
	I0804 01:44:13.851615  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:13.851654  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:13.868731  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34841
	I0804 01:44:13.869264  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:13.869841  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:13.869869  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:13.870192  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:13.870393  120978 main.go:141] libmachine: (ha-998889-m02) Calling .DriverName
	I0804 01:44:13.870672  120978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:44:13.870701  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHHostname
	I0804 01:44:13.873888  120978 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:44:13.874422  120978 main.go:141] libmachine: (ha-998889-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:26:17", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:39:39 +0000 UTC Type:0 Mac:52:54:00:bf:26:17 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:ha-998889-m02 Clientid:01:52:54:00:bf:26:17}
	I0804 01:44:13.874464  120978 main.go:141] libmachine: (ha-998889-m02) DBG | domain ha-998889-m02 has defined IP address 192.168.39.200 and MAC address 52:54:00:bf:26:17 in network mk-ha-998889
	I0804 01:44:13.874613  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHPort
	I0804 01:44:13.874814  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHKeyPath
	I0804 01:44:13.874994  120978 main.go:141] libmachine: (ha-998889-m02) Calling .GetSSHUsername
	I0804 01:44:13.875127  120978 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m02/id_rsa Username:docker}
	I0804 01:44:13.962445  120978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:44:13.981581  120978 kubeconfig.go:125] found "ha-998889" server: "https://192.168.39.254:8443"
	I0804 01:44:13.981613  120978 api_server.go:166] Checking apiserver status ...
	I0804 01:44:13.981650  120978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:44:13.996633  120978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	W0804 01:44:14.006666  120978 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:44:14.006745  120978 ssh_runner.go:195] Run: ls
	I0804 01:44:14.011322  120978 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0804 01:44:14.017669  120978 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0804 01:44:14.017692  120978 status.go:422] ha-998889-m02 apiserver status = Running (err=<nil>)
	I0804 01:44:14.017701  120978 status.go:257] ha-998889-m02 status: &{Name:ha-998889-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:44:14.017715  120978 status.go:255] checking status of ha-998889-m04 ...
	I0804 01:44:14.018073  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:14.018118  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:14.034264  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I0804 01:44:14.034744  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:14.035305  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:14.035332  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:14.035652  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:14.035912  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetState
	I0804 01:44:14.037728  120978 status.go:330] ha-998889-m04 host status = "Running" (err=<nil>)
	I0804 01:44:14.037747  120978 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:44:14.038091  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:14.038136  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:14.056933  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0804 01:44:14.057333  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:14.057874  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:14.057906  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:14.058237  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:14.058424  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetIP
	I0804 01:44:14.061116  120978 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:44:14.061538  120978 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:41:39 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:44:14.061569  120978 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:44:14.061694  120978 host.go:66] Checking if "ha-998889-m04" exists ...
	I0804 01:44:14.061997  120978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:44:14.062032  120978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:44:14.078922  120978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0804 01:44:14.079329  120978 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:44:14.079788  120978 main.go:141] libmachine: Using API Version  1
	I0804 01:44:14.079813  120978 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:44:14.080178  120978 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:44:14.080387  120978 main.go:141] libmachine: (ha-998889-m04) Calling .DriverName
	I0804 01:44:14.080589  120978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:44:14.080611  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHHostname
	I0804 01:44:14.083353  120978 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:44:14.083812  120978 main.go:141] libmachine: (ha-998889-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:fd:1f", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:41:39 +0000 UTC Type:0 Mac:52:54:00:19:fd:1f Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-998889-m04 Clientid:01:52:54:00:19:fd:1f}
	I0804 01:44:14.083843  120978 main.go:141] libmachine: (ha-998889-m04) DBG | domain ha-998889-m04 has defined IP address 192.168.39.183 and MAC address 52:54:00:19:fd:1f in network mk-ha-998889
	I0804 01:44:14.083994  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHPort
	I0804 01:44:14.084166  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHKeyPath
	I0804 01:44:14.084300  120978 main.go:141] libmachine: (ha-998889-m04) Calling .GetSSHUsername
	I0804 01:44:14.084425  120978 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889-m04/id_rsa Username:docker}
	W0804 01:44:32.545659  120978 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.183:22: connect: no route to host
	W0804 01:44:32.545772  120978 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	E0804 01:44:32.545808  120978 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host
	I0804 01:44:32.545822  120978 status.go:257] ha-998889-m04 status: &{Name:ha-998889-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0804 01:44:32.545840  120978 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.183:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr" : exit status 3
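In the status output above, the failure on ha-998889-m04 occurs at the SSH step: dialing 192.168.39.183:22 returns "connect: no route to host", so status reports Host:Error / Kubelet:Nonexistent and exits with status 3. A standalone reachability probe for that endpoint might look like the following Go sketch (the address is taken from the log; the timeout value is an assumption, and this is not minikube's own check):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe TCP port 22 on the m04 address reported in the log above.
		// A "no route to host" error here matches the status error for m04.
		addr := "192.168.39.183:22"
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err != nil {
			fmt.Printf("ssh endpoint unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh endpoint reachable")
	}
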
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-998889 -n ha-998889
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-998889 logs -n 25: (1.781139775s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m04 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp testdata/cp-test.txt                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889:/home/docker/cp-test_ha-998889-m04_ha-998889.txt                       |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889 sudo cat                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889.txt                                 |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m02:/home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m02 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m03:/home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n                                                                 | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | ha-998889-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-998889 ssh -n ha-998889-m03 sudo cat                                          | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC | 04 Aug 24 01:32 UTC |
	|         | /home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-998889 node stop m02 -v=7                                                     | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-998889 node start m02 -v=7                                                    | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-998889 -v=7                                                           | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-998889 -v=7                                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:35 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-998889 --wait=true -v=7                                                    | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:37 UTC | 04 Aug 24 01:41 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-998889                                                                | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:41 UTC |                     |
	| node    | ha-998889 node delete m03 -v=7                                                   | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:41 UTC | 04 Aug 24 01:42 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-998889 stop -v=7                                                              | ha-998889 | jenkins | v1.33.1 | 04 Aug 24 01:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 01:37:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 01:37:50.819879  118832 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:37:50.820493  118832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:37:50.820511  118832 out.go:304] Setting ErrFile to fd 2...
	I0804 01:37:50.820518  118832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:37:50.821116  118832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:37:50.821721  118832 out.go:298] Setting JSON to false
	I0804 01:37:50.822684  118832 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12015,"bootTime":1722723456,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:37:50.822757  118832 start.go:139] virtualization: kvm guest
	I0804 01:37:50.825063  118832 out.go:177] * [ha-998889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 01:37:50.826703  118832 notify.go:220] Checking for updates...
	I0804 01:37:50.826715  118832 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:37:50.828199  118832 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:37:50.830086  118832 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:37:50.831545  118832 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:37:50.832847  118832 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:37:50.834196  118832 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:37:50.835909  118832 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:37:50.836015  118832 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:37:50.836466  118832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:37:50.836542  118832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:37:50.851757  118832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0804 01:37:50.852171  118832 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:37:50.852780  118832 main.go:141] libmachine: Using API Version  1
	I0804 01:37:50.852812  118832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:37:50.853146  118832 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:37:50.853386  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.891026  118832 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 01:37:50.892241  118832 start.go:297] selected driver: kvm2
	I0804 01:37:50.892252  118832 start.go:901] validating driver "kvm2" against &{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:37:50.892396  118832 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:37:50.892711  118832 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:37:50.892781  118832 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 01:37:50.907886  118832 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 01:37:50.908792  118832 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 01:37:50.908877  118832 cni.go:84] Creating CNI manager for ""
	I0804 01:37:50.908893  118832 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0804 01:37:50.908999  118832 start.go:340] cluster config:
	{Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:37:50.909174  118832 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 01:37:50.911602  118832 out.go:177] * Starting "ha-998889" primary control-plane node in "ha-998889" cluster
	I0804 01:37:50.912806  118832 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:37:50.912836  118832 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 01:37:50.912845  118832 cache.go:56] Caching tarball of preloaded images
	I0804 01:37:50.912947  118832 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 01:37:50.912958  118832 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 01:37:50.913072  118832 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/config.json ...
	I0804 01:37:50.913254  118832 start.go:360] acquireMachinesLock for ha-998889: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 01:37:50.913294  118832 start.go:364] duration metric: took 22.304µs to acquireMachinesLock for "ha-998889"
	I0804 01:37:50.913308  118832 start.go:96] Skipping create...Using existing machine configuration
	I0804 01:37:50.913316  118832 fix.go:54] fixHost starting: 
	I0804 01:37:50.913603  118832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:37:50.913648  118832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:37:50.928415  118832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44451
	I0804 01:37:50.928816  118832 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:37:50.929287  118832 main.go:141] libmachine: Using API Version  1
	I0804 01:37:50.929313  118832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:37:50.929657  118832 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:37:50.929882  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.930069  118832 main.go:141] libmachine: (ha-998889) Calling .GetState
	I0804 01:37:50.931578  118832 fix.go:112] recreateIfNeeded on ha-998889: state=Running err=<nil>
	W0804 01:37:50.931612  118832 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 01:37:50.933822  118832 out.go:177] * Updating the running kvm2 "ha-998889" VM ...
	I0804 01:37:50.935031  118832 machine.go:94] provisionDockerMachine start ...
	I0804 01:37:50.935048  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:37:50.935285  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:50.937566  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:50.938059  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:50.938085  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:50.938228  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:50.938413  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:50.938575  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:50.938709  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:50.938861  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:50.939095  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:50.939107  118832 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 01:37:51.050432  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:37:51.050473  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.050739  118832 buildroot.go:166] provisioning hostname "ha-998889"
	I0804 01:37:51.050766  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.050981  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.053799  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.054252  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.054279  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.054429  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.054594  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.054748  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.054924  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.055062  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:51.055246  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:51.055259  118832 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-998889 && echo "ha-998889" | sudo tee /etc/hostname
	I0804 01:37:51.183699  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-998889
	
	I0804 01:37:51.183724  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.186905  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.187333  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.187362  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.187566  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.187783  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.187975  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.188112  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.188295  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:51.188471  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:51.188486  118832 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-998889' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-998889/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-998889' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 01:37:51.298379  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 01:37:51.298433  118832 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 01:37:51.298465  118832 buildroot.go:174] setting up certificates
	I0804 01:37:51.298479  118832 provision.go:84] configureAuth start
	I0804 01:37:51.298495  118832 main.go:141] libmachine: (ha-998889) Calling .GetMachineName
	I0804 01:37:51.298857  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:37:51.301447  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.301923  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.301953  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.302076  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.304734  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.305120  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.305154  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.305282  118832 provision.go:143] copyHostCerts
	I0804 01:37:51.305311  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:37:51.305347  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 01:37:51.305420  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 01:37:51.305508  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 01:37:51.305607  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:37:51.305628  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 01:37:51.305633  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 01:37:51.305657  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 01:37:51.305717  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:37:51.305733  118832 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 01:37:51.305737  118832 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 01:37:51.305758  118832 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 01:37:51.305816  118832 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.ha-998889 san=[127.0.0.1 192.168.39.12 ha-998889 localhost minikube]
	I0804 01:37:51.848379  118832 provision.go:177] copyRemoteCerts
	I0804 01:37:51.848444  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 01:37:51.848474  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:51.850980  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.851287  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:51.851323  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:51.851431  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:51.851639  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:51.851806  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:51.851933  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:37:51.937451  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 01:37:51.937553  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0804 01:37:51.963185  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 01:37:51.963263  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 01:37:51.988852  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 01:37:51.988923  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 01:37:52.016309  118832 provision.go:87] duration metric: took 717.812724ms to configureAuth
	I0804 01:37:52.016340  118832 buildroot.go:189] setting minikube options for container-runtime
	I0804 01:37:52.016619  118832 config.go:182] Loaded profile config "ha-998889": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:37:52.016711  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:37:52.019061  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:52.019438  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:37:52.019464  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:37:52.019593  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:37:52.019781  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:52.019937  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:37:52.020101  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:37:52.020329  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:37:52.020542  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:37:52.020559  118832 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 01:39:22.919739  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 01:39:22.919777  118832 machine.go:97] duration metric: took 1m31.984732599s to provisionDockerMachine
	I0804 01:39:22.919797  118832 start.go:293] postStartSetup for "ha-998889" (driver="kvm2")
	I0804 01:39:22.919815  118832 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 01:39:22.919842  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:22.920221  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 01:39:22.920252  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:22.923569  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:22.924009  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:22.924043  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:22.924213  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:22.924408  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:22.924578  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:22.924755  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.014066  118832 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 01:39:23.018775  118832 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 01:39:23.018808  118832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 01:39:23.018873  118832 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 01:39:23.019007  118832 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 01:39:23.019025  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 01:39:23.019132  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 01:39:23.029382  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:39:23.054834  118832 start.go:296] duration metric: took 135.020403ms for postStartSetup
	I0804 01:39:23.054878  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.055212  118832 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0804 01:39:23.055246  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.057917  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.058335  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.058362  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.058530  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.058696  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.058825  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.058931  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	W0804 01:39:23.144531  118832 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0804 01:39:23.144559  118832 fix.go:56] duration metric: took 1m32.231243673s for fixHost
	I0804 01:39:23.144582  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.147279  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.147638  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.147667  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.147799  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.148019  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.148180  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.148367  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.148559  118832 main.go:141] libmachine: Using SSH client type: native
	I0804 01:39:23.148747  118832 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0804 01:39:23.148760  118832 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 01:39:23.258458  118832 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722735563.225823179
	
	I0804 01:39:23.258481  118832 fix.go:216] guest clock: 1722735563.225823179
	I0804 01:39:23.258488  118832 fix.go:229] Guest: 2024-08-04 01:39:23.225823179 +0000 UTC Remote: 2024-08-04 01:39:23.144567352 +0000 UTC m=+92.360079634 (delta=81.255827ms)
	I0804 01:39:23.258530  118832 fix.go:200] guest clock delta is within tolerance: 81.255827ms
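
The fix.go lines above compare the guest clock (1722735563.225823179) against the host-side timestamp and accept the roughly 81ms skew as within tolerance. A minimal Go sketch of that kind of check, using an assumed example tolerance and a hypothetical helper name rather than minikube's actual code:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the skew between the guest clock and the
// host clock is small enough to skip resynchronization.
// (Illustrative helper; the tolerance passed in below is an assumed example value.)
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Date(2024, 8, 4, 1, 39, 23, 225823179, time.UTC) // guest clock value from the log above
	host := time.Date(2024, 8, 4, 1, 39, 23, 144567352, time.UTC)  // host-side timestamp from the log above
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints delta=81.255827ms within tolerance=true
}
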
	I0804 01:39:23.258538  118832 start.go:83] releasing machines lock for "ha-998889", held for 1m32.345235583s
	I0804 01:39:23.258558  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.258817  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:39:23.261393  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.261856  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.261901  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.262061  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262611  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262797  118832 main.go:141] libmachine: (ha-998889) Calling .DriverName
	I0804 01:39:23.262900  118832 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 01:39:23.262946  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.263071  118832 ssh_runner.go:195] Run: cat /version.json
	I0804 01:39:23.263100  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHHostname
	I0804 01:39:23.265696  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.265834  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266099  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.266138  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266266  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.266287  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:23.266312  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:23.266441  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.266445  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHPort
	I0804 01:39:23.266622  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.266703  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHKeyPath
	I0804 01:39:23.266780  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.266836  118832 main.go:141] libmachine: (ha-998889) Calling .GetSSHUsername
	I0804 01:39:23.266969  118832 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/ha-998889/id_rsa Username:docker}
	I0804 01:39:23.365134  118832 ssh_runner.go:195] Run: systemctl --version
	I0804 01:39:23.371698  118832 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 01:39:23.533286  118832 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 01:39:23.542177  118832 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 01:39:23.542251  118832 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 01:39:23.552285  118832 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 01:39:23.552323  118832 start.go:495] detecting cgroup driver to use...
	I0804 01:39:23.552410  118832 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 01:39:23.568759  118832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 01:39:23.582751  118832 docker.go:217] disabling cri-docker service (if available) ...
	I0804 01:39:23.582810  118832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 01:39:23.596566  118832 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 01:39:23.610638  118832 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 01:39:23.762526  118832 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 01:39:23.912937  118832 docker.go:233] disabling docker service ...
	I0804 01:39:23.913016  118832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 01:39:23.930695  118832 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 01:39:23.944819  118832 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 01:39:24.088982  118832 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 01:39:24.233893  118832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 01:39:24.248959  118832 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 01:39:24.268905  118832 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 01:39:24.268969  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.279582  118832 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 01:39:24.279655  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.290030  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.300992  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.311651  118832 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 01:39:24.322480  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.332847  118832 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.345218  118832 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 01:39:24.356338  118832 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 01:39:24.366374  118832 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 01:39:24.376651  118832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:39:24.520490  118832 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 01:39:27.508474  118832 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.987941441s)
	I0804 01:39:27.508507  118832 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 01:39:27.508571  118832 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 01:39:27.514468  118832 start.go:563] Will wait 60s for crictl version
	I0804 01:39:27.514529  118832 ssh_runner.go:195] Run: which crictl
	I0804 01:39:27.518222  118832 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 01:39:27.561914  118832 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 01:39:27.561993  118832 ssh_runner.go:195] Run: crio --version
	I0804 01:39:27.592019  118832 ssh_runner.go:195] Run: crio --version
	I0804 01:39:27.623034  118832 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 01:39:27.624470  118832 main.go:141] libmachine: (ha-998889) Calling .GetIP
	I0804 01:39:27.626952  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:27.627301  118832 main.go:141] libmachine: (ha-998889) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:37:c1", ip: ""} in network mk-ha-998889: {Iface:virbr1 ExpiryTime:2024-08-04 02:27:48 +0000 UTC Type:0 Mac:52:54:00:3a:37:c1 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:ha-998889 Clientid:01:52:54:00:3a:37:c1}
	I0804 01:39:27.627322  118832 main.go:141] libmachine: (ha-998889) DBG | domain ha-998889 has defined IP address 192.168.39.12 and MAC address 52:54:00:3a:37:c1 in network mk-ha-998889
	I0804 01:39:27.627554  118832 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 01:39:27.632760  118832 kubeadm.go:883] updating cluster {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 01:39:27.632900  118832 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 01:39:27.632942  118832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:39:27.678362  118832 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:39:27.678386  118832 crio.go:433] Images already preloaded, skipping extraction
	I0804 01:39:27.678435  118832 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 01:39:27.716304  118832 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 01:39:27.716329  118832 cache_images.go:84] Images are preloaded, skipping loading
	I0804 01:39:27.716342  118832 kubeadm.go:934] updating node { 192.168.39.12 8443 v1.30.3 crio true true} ...
	I0804 01:39:27.716469  118832 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-998889 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 01:39:27.716555  118832 ssh_runner.go:195] Run: crio config
	I0804 01:39:27.765435  118832 cni.go:84] Creating CNI manager for ""
	I0804 01:39:27.765464  118832 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0804 01:39:27.765477  118832 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 01:39:27.765507  118832 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-998889 NodeName:ha-998889 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 01:39:27.765695  118832 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-998889"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
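
The kubeadm, kubelet and kube-proxy configuration printed above is rendered from the cluster settings (advertise address 192.168.39.12, bind port 8443, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12, cgroupfs driver). As a hedged illustration of that templating step, here is a small Go sketch that fills a fragment of such a config from those values; the template text and field names are hypothetical, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmValues is a hypothetical subset of the values visible in the generated config above.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	CgroupDriver     string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	// Values taken from the log above; the fragment itself is only a sketch.
	v := kubeadmValues{
		AdvertiseAddress: "192.168.39.12",
		BindPort:         8443,
		NodeName:         "ha-998889",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		CgroupDriver:     "cgroupfs",
	}
	if err := tmpl.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}
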
	
	I0804 01:39:27.765725  118832 kube-vip.go:115] generating kube-vip config ...
	I0804 01:39:27.765779  118832 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0804 01:39:27.777478  118832 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0804 01:39:27.777604  118832 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
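
The static pod manifest above runs kube-vip with leader election and control-plane load balancing on the VIP 192.168.39.254:8443. A minimal, hypothetical reachability probe for that endpoint (illustrative only, not part of the test harness):

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// dialVIP attempts a TLS handshake against the control-plane VIP so we can
// tell whether kube-vip is currently advertising the address.
// (Illustrative check; InsecureSkipVerify is used only because this sketch
// does not load the cluster CA that signed the apiserver certificate.)
func dialVIP(addr string, timeout time.Duration) error {
	d := &net.Dialer{Timeout: timeout}
	conn, err := tls.DialWithDialer(d, "tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// VIP and port taken from the kube-vip manifest above.
	if err := dialVIP("192.168.39.254:8443", 3*time.Second); err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	fmt.Println("VIP reachable")
}
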
	I0804 01:39:27.777673  118832 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 01:39:27.788015  118832 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 01:39:27.788090  118832 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0804 01:39:27.798706  118832 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0804 01:39:27.817002  118832 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 01:39:27.834875  118832 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0804 01:39:27.852791  118832 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0804 01:39:27.871384  118832 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0804 01:39:27.875698  118832 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 01:39:28.026791  118832 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 01:39:28.041409  118832 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889 for IP: 192.168.39.12
	I0804 01:39:28.041432  118832 certs.go:194] generating shared ca certs ...
	I0804 01:39:28.041448  118832 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.041657  118832 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 01:39:28.041713  118832 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 01:39:28.041727  118832 certs.go:256] generating profile certs ...
	I0804 01:39:28.041824  118832 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/client.key
	I0804 01:39:28.041859  118832 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09
	I0804 01:39:28.041884  118832 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.12 192.168.39.200 192.168.39.148 192.168.39.254]
	I0804 01:39:28.107335  118832 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 ...
	I0804 01:39:28.107371  118832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09: {Name:mk8487245ed0129d14fed5abbd35e04bb8f4a32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.107563  118832 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09 ...
	I0804 01:39:28.107583  118832 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09: {Name:mk32e3f0283c85bf8bfebc6f456027cbc544d49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 01:39:28.107695  118832 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt.3756aa09 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt
	I0804 01:39:28.107879  118832 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key.3756aa09 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key
	I0804 01:39:28.108072  118832 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key
	I0804 01:39:28.108091  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 01:39:28.108106  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 01:39:28.108121  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 01:39:28.108147  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 01:39:28.108164  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 01:39:28.108183  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 01:39:28.108208  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 01:39:28.108226  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 01:39:28.108288  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 01:39:28.108326  118832 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 01:39:28.108338  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 01:39:28.108379  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 01:39:28.108409  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 01:39:28.108444  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 01:39:28.108500  118832 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 01:39:28.108536  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.108557  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.108574  118832 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.109206  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 01:39:28.135802  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 01:39:28.160655  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 01:39:28.186312  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 01:39:28.210717  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0804 01:39:28.236019  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 01:39:28.290731  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 01:39:28.316103  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/ha-998889/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 01:39:28.341244  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 01:39:28.367717  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 01:39:28.392260  118832 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 01:39:28.416664  118832 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 01:39:28.434059  118832 ssh_runner.go:195] Run: openssl version
	I0804 01:39:28.439998  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 01:39:28.450790  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.455535  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.455583  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 01:39:28.461261  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 01:39:28.470432  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 01:39:28.480819  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.485222  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.485268  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 01:39:28.490947  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 01:39:28.500081  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 01:39:28.510831  118832 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.515405  118832 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.515453  118832 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 01:39:28.521202  118832 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 01:39:28.530690  118832 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 01:39:28.535560  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 01:39:28.541294  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 01:39:28.547235  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 01:39:28.552846  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 01:39:28.558925  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 01:39:28.564644  118832 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 01:39:28.570087  118832 kubeadm.go:392] StartCluster: {Name:ha-998889 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-998889 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.200 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.183 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:39:28.570254  118832 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 01:39:28.570323  118832 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 01:39:28.611727  118832 cri.go:89] found id: "ec86579bf6c158df3821fb9dbec8faef8aa3d568dab1a5d1f7159056eb280795"
	I0804 01:39:28.611755  118832 cri.go:89] found id: "88e6ceb8a3a8cb99a438d980237741ca6d76b66be178c3e6ab3b64740e7b4725"
	I0804 01:39:28.611760  118832 cri.go:89] found id: "9689d7b18576bd7a530601f23fd61732e372c717c0773fbf8e9545eeea3f25ad"
	I0804 01:39:28.611763  118832 cri.go:89] found id: "7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947"
	I0804 01:39:28.611766  118832 cri.go:89] found id: "fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9"
	I0804 01:39:28.611769  118832 cri.go:89] found id: "426453d5275e580d04fe66a71768029c0648676dd6d8940d130f578bd5c38184"
	I0804 01:39:28.611771  118832 cri.go:89] found id: "e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957"
	I0804 01:39:28.611774  118832 cri.go:89] found id: "e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372"
	I0804 01:39:28.611776  118832 cri.go:89] found id: "95795d7d25530e5e65e05005ab4d7ef06b9aa7ebf5a75a5acd929285e96eb81a"
	I0804 01:39:28.611781  118832 cri.go:89] found id: "cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6"
	I0804 01:39:28.611783  118832 cri.go:89] found id: "3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df"
	I0804 01:39:28.611799  118832 cri.go:89] found id: "0c31b954330c44a60bd34998fab563790c0dce116b2e3e3f1170afce41a8e977"
	I0804 01:39:28.611801  118832 cri.go:89] found id: "8d16347be7d62104da79301d96bf9ce930b270d3e989d2b1067d094179991318"
	I0804 01:39:28.611803  118832 cri.go:89] found id: ""
	I0804 01:39:28.611848  118832 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.191240313Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735873191216174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4eb64f4c-52f6-4150-b2af-d3380d859257 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.191693515Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2839f908-b821-46b7-916e-d4b00d4ff583 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.191761239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2839f908-b821-46b7-916e-d4b00d4ff583 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.192257246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2839f908-b821-46b7-916e-d4b00d4ff583 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.234770189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=977cd542-36c7-4472-b1b7-95e90b6d7a4e name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.235127445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=977cd542-36c7-4472-b1b7-95e90b6d7a4e name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.237027476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cddccc6b-18e7-481a-a017-5946fbfd612a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.237465794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735873237442749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cddccc6b-18e7-481a-a017-5946fbfd612a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.238563766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ec562a3-b32b-43ef-93c4-4a8cfccbee13 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.238638850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ec562a3-b32b-43ef-93c4-4a8cfccbee13 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.239096419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ec562a3-b32b-43ef-93c4-4a8cfccbee13 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.283115123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e63e301-0c1c-4813-a35e-42778d75b6f0 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.283209633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e63e301-0c1c-4813-a35e-42778d75b6f0 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.284660741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f044b4c-6abe-43e7-b379-81cb0b3566aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.285599474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735873285475643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f044b4c-6abe-43e7-b379-81cb0b3566aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.286553907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa77bbd3-3875-45ad-ba54-cc793585fbc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.286627966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa77bbd3-3875-45ad-ba54-cc793585fbc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.287481799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa77bbd3-3875-45ad-ba54-cc793585fbc7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.333953040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46bc2743-b80d-4020-be1f-81bcda71c011 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.334046983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46bc2743-b80d-4020-be1f-81bcda71c011 name=/runtime.v1.RuntimeService/Version
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.335252702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e55f125c-5933-442b-9728-f4263858041c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.335714164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722735873335691397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e55f125c-5933-442b-9728-f4263858041c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.336394364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77a394e7-0e6d-4b89-9ee7-2f9bfab5ee9a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.336457279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77a394e7-0e6d-4b89-9ee7-2f9bfab5ee9a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 01:44:33 ha-998889 crio[3722]: time="2024-08-04 01:44:33.336919607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2769cff2a2b2d4825012559bed9bb50af3c2f39380afc7356e8d0a6b6f3eb218,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722735651405627687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722735616432668420,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722735612413184013,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:183f6a6f77331a1fb20eeae57c71ce1dec8f350f0fe0c423c6fe4dbde357ccfe,PodSandboxId:5d68c8d7e7c12618843997c81fb5620722085b8e43a585772cdcad0ecacfaf1e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722735607741527540,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annotations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c,PodSandboxId:b8b3aa4054b5bf789b7d2e9ffd7a908aa384cc8035b66d19c7145d14e5671f9c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722735605395515152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2eb4a37-052e-4e8e-9b0d-d58847698eeb,},Annotations:map[string]string{io.kubernetes.container.hash: 624da2e0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b434b44f1bf118b16a5b0a2fad732e246821c6d24e8f7e96a958348f6d2d2913,PodSandboxId:3f4471219e95e097c42916bb1033bb9b290dc8ed46552ad064046c11f5d7e35a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722735585880347092,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b2a93626eb8196dbb6199516a79b5b7,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5,PodSandboxId:88980a4edc1a46ad05e16b741205e29e7110029806cc4d56796ac5fe8e94424e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722735574554530787,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12,PodSandboxId:3aadcc23e0102801456d04054c7c1db54a4f44806fcd9f3b88246684b01da8fb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722735574614082607,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb87ebf
1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b,PodSandboxId:44a59ecb458198d259fe1bc852518aaef857bcd8368cfd374a478318abcb3692,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574600582082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kubernetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7,PodSandboxId:71b522b8398227d22bd4d75fac6b504d7eeb12c43833008d488abcee3fc98e38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722735574550208095,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864,PodSandboxId:ca146213b85b7312127ad92cfcda823e4e7c16171b8449a7385e9a1c177d5952,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722735574254077775,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-998889,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: afa070e1274a0587ba8559359cd730bd,},Annotations:map[string]string{io.kubernetes.container.hash: bde38c8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8,PodSandboxId:97fe4c22a42659dd60cfc446982ac2a1fac81004c41636bc641046253cc77bc9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722735574346146884,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e,PodSandboxId:923dc81c82e1eca3bf11d7da10a1320a327fe3f0b950bb95fb40b01f1937d48b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722735574328643003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: b717f0cd85eef929ccb4647ca0b1eb7b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d,PodSandboxId:74c4aa5b4cf9edbdb0d3e0eb8df0a845a9135b39424b43f148d65609cdb147cc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722735574272482708,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Ann
otations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bb7230a6669322c96b5d99c8ecf904c8abe59db86e2860a99da71ee29eb33f3,PodSandboxId:5b4550fd8d43d8d495bd0a04e214926ee63dcd646af30779dba620f69ce82048,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722735070152369221,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-v468b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c062b796-79d1-45d8-8bbf-2c7ef1cb8f8c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4dcbb187,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947,PodSandboxId:3037e05c8f0db399a997cca2bb3789a77f09cadf5c2cc9bd590f5d056ec77f91,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927898145711,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b8ds7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7c997bc-312e-488c-ad30-0647eb5b757e,},Annotations:map[string]string{io.kube
rnetes.container.hash: 82f7f26f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9,PodSandboxId:a3cc1795993d6d2ea75fa8693058547ed31cfb95a0c9e2a99b9be2a8e9adeec5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722734927839045470,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ddb5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 186999bf-43e4-43e7-a5dc-c84331a2f521,},Annotations:map[string]string{io.kubernetes.container.hash: bd7e4104,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957,PodSandboxId:120c9a2eb52aaf3d8752a62bb722384470f90152223b9734365ff0ab48ea3983,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722734915708486914,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gc22h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db5d63c3-4231-45ae-a2e2-b48fbf64be91,},Annotations:map[string]string{io.kubernetes.container.hash: 7d42c2a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372,PodSandboxId:9689d6db72b0213c2542aedfbe01e29fd516c523078d167e8a92fb53f09164d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722734910732550281,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-56twz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fc726d-cf1c-44a8-839e-84b90f69609f,},Annotations:map[string]string{io.kubernetes.container.hash: e6d92105,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6,PodSandboxId:580e42f37b240b7131711e81c39d698bb1558a7ee2700411584f17275a0b0fb1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722734890252434611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5fe8345bac861dc04b66054949f60121,},Annotations:map[string]string{io.kubernetes.container.hash: 5eb2f319,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df,PodSandboxId:c25b0800264cf39cffb0e952097275a4b5ff1129e170a6adfe60187ce9df544f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722734890219676608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-998889,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2e8ba5672f9e6a88e2c591f60ebe757,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77a394e7-0e6d-4b89-9ee7-2f9bfab5ee9a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2769cff2a2b2d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   b8b3aa4054b5b       storage-provisioner
	abbde27610f2f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   923dc81c82e1e       kube-controller-manager-ha-998889
	ae15bd3bdf8b5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   ca146213b85b7       kube-apiserver-ha-998889
	183f6a6f77331       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   5d68c8d7e7c12       busybox-fc5497c4f-v468b
	011410390b0d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   b8b3aa4054b5b       storage-provisioner
	b434b44f1bf11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   3f4471219e95e       kube-vip-ha-998889
	316140ed791e6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   3aadcc23e0102       kindnet-gc22h
	2cb87ebf1b446       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   44a59ecb45819       coredns-7db6d8ff4d-b8ds7
	11f3f2daf285f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   88980a4edc1a4       kube-proxy-56twz
	e84d839561d00       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   71b522b839822       coredns-7db6d8ff4d-ddb5m
	2d5d507e714a2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   97fe4c22a4265       kube-scheduler-ha-998889
	1dd3b269ecfda       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   923dc81c82e1e       kube-controller-manager-ha-998889
	9f5483ec0343a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   74c4aa5b4cf9e       etcd-ha-998889
	1460950dd5a80       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   ca146213b85b7       kube-apiserver-ha-998889
	1bb7230a66693       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   5b4550fd8d43d       busybox-fc5497c4f-v468b
	7ce1fc9d2ceb3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   3037e05c8f0db       coredns-7db6d8ff4d-b8ds7
	fe75909603216       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   a3cc1795993d6       coredns-7db6d8ff4d-ddb5m
	e987e973e97a5       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   120c9a2eb52aa       kindnet-gc22h
	e32fb23a61d2d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   9689d6db72b02       kube-proxy-56twz
	cbd934bafbbf1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   580e42f37b240       etcd-ha-998889
	3f264e5c2143d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   c25b0800264cf       kube-scheduler-ha-998889
	
	
	==> coredns [2cb87ebf1b4462245fd74b2f591faf5c5c42d2b44d6e09789a4985a0f33b9f6b] <==
	[INFO] plugin/kubernetes: Trace[897242627]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:43.902) (total time: 10001ms):
	Trace[897242627]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:39:53.903)
	Trace[897242627]: [10.00125379s] [10.00125379s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[555702147]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:46.341) (total time: 10723ms):
	Trace[555702147]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer 10722ms (01:39:57.064)
	Trace[555702147]: [10.723707768s] [10.723707768s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:39240->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36692->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:36692->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7ce1fc9d2ceb3411d7ea657d88612ea7b9d4a84e04872677f0029d1db6afa947] <==
	[INFO] 10.244.1.2:54493 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154283s
	[INFO] 10.244.1.2:45366 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188537s
	[INFO] 10.244.1.2:42179 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223485s
	[INFO] 10.244.2.2:48925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000257001s
	[INFO] 10.244.2.2:46133 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001441239s
	[INFO] 10.244.2.2:40620 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108193s
	[INFO] 10.244.2.2:45555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071897s
	[INFO] 10.244.0.4:57133 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00007622s
	[INFO] 10.244.0.4:45128 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012024s
	[INFO] 10.244.0.4:33660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084733s
	[INFO] 10.244.1.2:48368 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133283s
	[INFO] 10.244.1.2:42909 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130327s
	[INFO] 10.244.1.2:54181 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067193s
	[INFO] 10.244.2.2:36881 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125847s
	[INFO] 10.244.2.2:52948 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090317s
	[INFO] 10.244.1.2:34080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132803s
	[INFO] 10.244.1.2:38625 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147078s
	[INFO] 10.244.2.2:41049 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205078s
	[INFO] 10.244.2.2:47520 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094037s
	[INFO] 10.244.2.2:48004 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000211339s
	[INFO] 10.244.0.4:52706 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087998s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1942&timeout=6m30s&timeoutSeconds=390&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1948&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [e84d839561d002828f6208c0cb29e0f82e06fed050e02288cc99dc4cd01484e7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1648200618]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (04-Aug-2024 01:39:39.797) (total time: 10001ms):
	Trace[1648200618]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:39:49.799)
	Trace[1648200618]: [10.001816969s] [10.001816969s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34500->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:34500->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50406->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50406->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe75909603216aed8c51c1c0d04758cad62438a67f944a45095e4295bea74ce9] <==
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001982538s
	[INFO] 10.244.2.2:59450 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165578s
	[INFO] 10.244.2.2:44599 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132406s
	[INFO] 10.244.2.2:38280 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086968s
	[INFO] 10.244.0.4:52340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111664s
	[INFO] 10.244.0.4:55794 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001989197s
	[INFO] 10.244.0.4:56345 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.0.4:50778 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090371s
	[INFO] 10.244.0.4:47116 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132729s
	[INFO] 10.244.1.2:54780 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104255s
	[INFO] 10.244.2.2:52086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092312s
	[INFO] 10.244.2.2:36096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008133s
	[INFO] 10.244.0.4:35645 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000084037s
	[INFO] 10.244.0.4:57031 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00004652s
	[INFO] 10.244.0.4:53264 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005834s
	[INFO] 10.244.0.4:52476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111362s
	[INFO] 10.244.1.2:39754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000161853s
	[INFO] 10.244.1.2:44320 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018965s
	[INFO] 10.244.2.2:58250 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133355s
	[INFO] 10.244.0.4:34248 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137551s
	[INFO] 10.244.0.4:46858 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000082831s
	[INFO] 10.244.0.4:52801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017483s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1942&timeout=9m36s&timeoutSeconds=576&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-998889
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_28_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:28:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:44:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:40:15 +0000   Sun, 04 Aug 2024 01:28:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    ha-998889
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa9bfc18a8dd4a25ae5d0b652cb98f91
	  System UUID:                fa9bfc18-a8dd-4a25-ae5d-0b652cb98f91
	  Boot ID:                    ddede9e4-4547-41a5-820a-f6568caf06a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v468b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-b8ds7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-ddb5m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-998889                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-gc22h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-998889             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-998889    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-56twz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-998889             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-998889                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m17s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-998889 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-998889 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-998889 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-998889 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Warning  ContainerGCFailed        5m17s (x2 over 6m17s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-998889 event: Registered Node ha-998889 in Controller
	
	
	Name:               ha-998889-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_29_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:29:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 01:43:11 +0000   Sun, 04 Aug 2024 01:43:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 01:43:11 +0000   Sun, 04 Aug 2024 01:43:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 01:43:11 +0000   Sun, 04 Aug 2024 01:43:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 01:43:11 +0000   Sun, 04 Aug 2024 01:43:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    ha-998889-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8754ed7ba6c04d5d808bf540e4c5a093
	  System UUID:                8754ed7b-a6c0-4d5d-808b-f540e4c5a093
	  Boot ID:                    f010620e-c28e-4dfd-9fd8-683c4880bba4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7jqps                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-998889-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-mm9t2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-998889-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-998889-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-v4j77                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-998889-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-998889-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m57s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-998889-m02 status is now: NodeNotReady
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m39s (x8 over 4m40s)  kubelet          Node ha-998889-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m39s (x8 over 4m40s)  kubelet          Node ha-998889-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-998889-m02 event: Registered Node ha-998889-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-998889-m02 status is now: NodeNotReady
	
	
	Name:               ha-998889-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-998889-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=ha-998889
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T01_31_44_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:31:43 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-998889-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 01:42:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:42:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:42:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:42:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 01:41:45 +0000   Sun, 04 Aug 2024 01:42:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    ha-998889-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e86557b9788446aca3bd64c7bcc82957
	  System UUID:                e86557b9-7884-46ac-a3bd-64c7bcc82957
	  Boot ID:                    cd38eada-249f-443d-b928-f87347c45a30
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rrjmx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5cv7z              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-9qdn6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-998889-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   NodeNotReady             3m28s                  node-controller  Node ha-998889-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m8s                   node-controller  Node ha-998889-m04 event: Registered Node ha-998889-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-998889-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-998889-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-998889-m04 has been rebooted, boot id: cd38eada-249f-443d-b928-f87347c45a30
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-998889-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-998889-m04 status is now: NodeNotReady
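	
	The three node descriptions above show ha-998889 and ha-998889-m02 Ready, ha-998889-m04 tainted unreachable after its kubelet stopped posting status, and ha-998889-m03 already gone from the cluster. A minimal sketch for condensing that state (kubeconfig context name ha-998889 assumed):
	
	  # Per-node readiness, the taints on the unreachable worker, and its recent events.
	  kubectl --context ha-998889 get nodes -o wide
	  kubectl --context ha-998889 get node ha-998889-m04 -o jsonpath='{.spec.taints}{"\n"}'
	  kubectl --context ha-998889 get events --field-selector involvedObject.kind=Node,involvedObject.name=ha-998889-m04 --sort-by=.lastTimestamp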
	
	
	==> dmesg <==
	[  +9.869407] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.063774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058921] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.163748] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.144819] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.274744] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[Aug 4 01:28] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.067193] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.231084] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.024644] kauditd_printk_skb: 51 callbacks suppressed
	[  +6.031121] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.102027] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.498623] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.120089] kauditd_printk_skb: 29 callbacks suppressed
	[Aug 4 01:29] kauditd_printk_skb: 26 callbacks suppressed
	[Aug 4 01:39] systemd-fstab-generator[3640]: Ignoring "noauto" option for root device
	[  +0.155417] systemd-fstab-generator[3652]: Ignoring "noauto" option for root device
	[  +0.177356] systemd-fstab-generator[3666]: Ignoring "noauto" option for root device
	[  +0.143690] systemd-fstab-generator[3678]: Ignoring "noauto" option for root device
	[  +0.288412] systemd-fstab-generator[3706]: Ignoring "noauto" option for root device
	[  +3.506403] systemd-fstab-generator[3809]: Ignoring "noauto" option for root device
	[  +5.906362] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.013351] kauditd_printk_skb: 86 callbacks suppressed
	[Aug 4 01:40] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.628974] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [9f5483ec0343a2ae1436604203fed3da83bf10db0889e25d1da15d252965142d] <==
	{"level":"info","ts":"2024-08-04T01:41:07.962593Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ab0e927fe14112bb","to":"7f4b3c159583e07e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-04T01:41:07.962669Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:10.280887Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:10.281156Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"7f4b3c159583e07e","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-04T01:41:11.347525Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.149423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-998889-m03\" ","response":"range_response_count:1 size:6894"}
	{"level":"info","ts":"2024-08-04T01:41:11.347718Z","caller":"traceutil/trace.go:171","msg":"trace[432126751] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-998889-m03; range_end:; response_count:1; response_revision:2415; }","duration":"107.393178ms","start":"2024-08-04T01:41:11.240281Z","end":"2024-08-04T01:41:11.347674Z","steps":["trace[432126751] 'agreement among raft nodes before linearized reading'  (duration: 66.185051ms)","trace[432126751] 'range keys from in-memory index tree'  (duration: 40.904373ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T01:41:59.350258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb switched to configuration voters=(12325950308097266363 15068717469949514310)"}
	{"level":"info","ts":"2024-08-04T01:41:59.356509Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"7f4b3c159583e07e","removed-remote-peer-urls":["https://192.168.39.148:2380"]}
	{"level":"info","ts":"2024-08-04T01:41:59.356605Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.356977Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:59.357038Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.357025Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"ab0e927fe14112bb","removed-member-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.357217Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-04T01:41:59.357228Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:59.35729Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:59.357386Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.357706Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e","error":"context canceled"}
	{"level":"warn","ts":"2024-08-04T01:41:59.357798Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"7f4b3c159583e07e","error":"failed to read 7f4b3c159583e07e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-04T01:41:59.358Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.358385Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e","error":"context canceled"}
	{"level":"info","ts":"2024-08-04T01:41:59.358455Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:59.358524Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:41:59.358575Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ab0e927fe14112bb","removed-remote-peer-id":"7f4b3c159583e07e"}
	{"level":"warn","ts":"2024-08-04T01:41:59.372452Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-04T01:41:59.37331Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ab0e927fe14112bb","remote-peer-id-stream-handler":"ab0e927fe14112bb","remote-peer-id-from":"7f4b3c159583e07e"}
	
	
	==> etcd [cbd934bafbbf145cfe4829d5d714cb1ea83995d84ed5174711de16ce0b3551e6] <==
	2024/08/04 01:37:52 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-04T01:37:52.165281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.916641232s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-04T01:37:52.188582Z","caller":"traceutil/trace.go:171","msg":"trace[1810148600] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"8.939940048s","start":"2024-08-04T01:37:43.248636Z","end":"2024-08-04T01:37:52.188576Z","steps":["trace[1810148600] 'agreement among raft nodes before linearized reading'  (duration: 8.916641418s)"],"step_count":1}
	2024/08/04 01:37:52 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-04T01:37:52.201182Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":1349832058482900657,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-04T01:37:52.313333Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T01:37:52.313436Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.12:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T01:37:52.313513Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ab0e927fe14112bb","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-04T01:37:52.313732Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313773Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313815Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.313984Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.31411Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314216Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314249Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"d11ed6b391105246"}
	{"level":"info","ts":"2024-08-04T01:37:52.314259Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314272Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31431Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31441Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.31449Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314566Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ab0e927fe14112bb","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.314595Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"7f4b3c159583e07e"}
	{"level":"info","ts":"2024-08-04T01:37:52.318136Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-04T01:37:52.318247Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2024-08-04T01:37:52.318285Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-998889","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"]}
	
	
	==> kernel <==
	 01:44:34 up 16 min,  0 users,  load average: 0.38, 0.54, 0.37
	Linux ha-998889 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [316140ed791e6600a9053ebd6d92b28bcb6a92ece2fc5d95bb49b3eb952f0e12] <==
	I0804 01:43:45.828202       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:43:55.829376       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:43:55.829674       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:43:55.830009       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:43:55.830040       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:43:55.830137       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:43:55.830158       1 main.go:299] handling current node
	I0804 01:44:05.828724       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:44:05.828992       1 main.go:299] handling current node
	I0804 01:44:05.829049       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:44:05.829074       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:44:05.829373       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:44:05.829407       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:44:15.825991       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:44:15.826097       1 main.go:299] handling current node
	I0804 01:44:15.826127       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:44:15.826144       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:44:15.826292       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:44:15.826313       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:44:25.829755       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:44:25.829977       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:44:25.830162       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:44:25.830193       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:44:25.830307       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:44:25.830345       1 main.go:299] handling current node
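	
	The current kindnet instance reconciles the three remaining nodes every ten seconds, mapping each node's InternalIP to its PodCIDR. A quick spot check (profile name ha-998889 assumed) is to look for one route per remote PodCIDR on the primary node, e.g. 10.244.1.0/24 via 192.168.39.200 and 10.244.3.0/24 via 192.168.39.183:
	
	  # Routes programmed by kindnet on the control-plane VM.
	  minikube -p ha-998889 ssh -- "ip route show" | grep 10.244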
	
	
	==> kindnet [e987e973e97a5d8196f6e345e354a4d7d744f255d6f7eb258f4f9abc1b495957] <==
	I0804 01:37:26.892070       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:26.892089       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:26.892293       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:26.892320       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:26.892381       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:26.892400       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:37:36.892166       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:37:36.892193       1 main.go:299] handling current node
	I0804 01:37:36.892206       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:36.892210       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:36.892398       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:36.892405       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:36.892480       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:36.892485       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	I0804 01:37:46.892258       1 main.go:295] Handling node with IPs: map[192.168.39.12:{}]
	I0804 01:37:46.892313       1 main.go:299] handling current node
	I0804 01:37:46.892328       1 main.go:295] Handling node with IPs: map[192.168.39.200:{}]
	I0804 01:37:46.892334       1 main.go:322] Node ha-998889-m02 has CIDR [10.244.1.0/24] 
	I0804 01:37:46.892470       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0804 01:37:46.892493       1 main.go:322] Node ha-998889-m03 has CIDR [10.244.2.0/24] 
	I0804 01:37:46.892552       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 01:37:46.892557       1 main.go:322] Node ha-998889-m04 has CIDR [10.244.3.0/24] 
	E0804 01:37:47.235469       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1911&timeout=6m31s&timeoutSeconds=391&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	W0804 01:37:50.307465       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1911": dial tcp 10.96.0.1:443: connect: no route to host
	E0804 01:37:50.307552       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1911": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [1460950dd5a80f135fdd8a7a3f16757474ae1ab676814f9b6515fa267b2b8864] <==
	I0804 01:39:35.140393       1 options.go:221] external host was not specified, using 192.168.39.12
	I0804 01:39:35.142993       1 server.go:148] Version: v1.30.3
	I0804 01:39:35.143039       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:39:36.042285       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0804 01:39:36.051940       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 01:39:36.057009       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0804 01:39:36.057042       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 01:39:36.057301       1 instance.go:299] Using reconciler: lease
	W0804 01:39:56.040102       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0804 01:39:56.040102       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0804 01:39:56.058385       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ae15bd3bdf8b5e879646ffef26a7b6f6a0249cfe8e6aa38beb38ba1ca80695f3] <==
	I0804 01:40:14.802342       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0804 01:40:14.802803       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0804 01:40:14.802918       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0804 01:40:14.884713       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 01:40:14.885805       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 01:40:14.886624       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 01:40:14.887110       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 01:40:14.889653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 01:40:14.900125       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0804 01:40:14.903111       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.200]
	I0804 01:40:14.903214       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 01:40:14.903397       1 aggregator.go:165] initial CRD sync complete...
	I0804 01:40:14.903431       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 01:40:14.903454       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 01:40:14.903475       1 cache.go:39] Caches are synced for autoregister controller
	I0804 01:40:14.923257       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 01:40:14.929797       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 01:40:14.929878       1 policy_source.go:224] refreshing policies
	I0804 01:40:14.985135       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 01:40:14.987904       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 01:40:15.004324       1 controller.go:615] quota admission added evaluator for: endpoints
	I0804 01:40:15.043047       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0804 01:40:15.050743       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0804 01:40:15.799097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0804 01:40:16.179134       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.12 192.168.39.200]
	
	
	==> kube-controller-manager [1dd3b269ecfda055748f704827d4acecf0b17f1b0fc525783d8e893cd42f576e] <==
	I0804 01:39:35.694412       1 serving.go:380] Generated self-signed cert in-memory
	I0804 01:39:36.195307       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0804 01:39:36.195466       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:39:36.197138       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0804 01:39:36.197264       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 01:39:36.197291       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0804 01:39:36.197317       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0804 01:39:57.065244       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.12:8443/healthz\": dial tcp 192.168.39.12:8443: connect: connection refused"
	
	
	==> kube-controller-manager [abbde27610f2f5600ab96e13c597a86b72e1bc87c5efe34182b20b810c400f3d] <==
	I0804 01:41:58.225945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.065µs"
	I0804 01:41:58.258365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.179µs"
	I0804 01:41:58.262024       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.79µs"
	I0804 01:41:59.084213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.1269ms"
	I0804 01:41:59.084535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.856µs"
	I0804 01:42:10.996144       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-998889-m04"
	E0804 01:42:11.062599       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-998889-m03", UID:"23f99c7a-b964-4a52-911c-f0c248a77b9e", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_
:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-998889-m03", UID:"8885cc9e-1719-47c7-9d78-2bb901e39ed7", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-998889-m03" not found
	E0804 01:42:11.064134       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-998889-m03", UID:"a2154f31-c641-4dbc-843b-5056d6a01c17", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerW
ait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-998889-m03", UID:"8885cc9e-1719-47c7-9d78-2bb901e39ed7", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-998889-m03" not found
	E0804 01:42:27.836939       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:27.837073       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:27.837102       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:27.837129       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:27.837152       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:47.837657       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:47.837769       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:47.837796       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:47.837820       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	E0804 01:42:47.837909       1 gc_controller.go:153] "Failed to get node" err="node \"ha-998889-m03\" not found" logger="pod-garbage-collector-controller" node="ha-998889-m03"
	I0804 01:42:47.929357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.269435ms"
	I0804 01:42:47.953157       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.510597ms"
	I0804 01:42:47.955096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.42µs"
	I0804 01:42:47.982390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.560244ms"
	I0804 01:42:47.985027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="205.547µs"
	I0804 01:43:04.000127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.774995ms"
	I0804 01:43:04.000353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.027µs"
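	
	The garbage-collector and pod-GC errors above all reference objects owned by the deleted node ha-998889-m03 (its CSINode and its kube-node-lease Lease) and should stop once those stale objects are gone. A short sketch for checking whether they still exist (context name assumed):
	
	  kubectl --context ha-998889 get node ha-998889-m03
	  kubectl --context ha-998889 get csinode ha-998889-m03
	  kubectl --context ha-998889 -n kube-node-lease get lease ha-998889-m03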
	
	
	==> kube-proxy [11f3f2daf285fceb2971c7f383002c058ed68659dff2a69b536dfbc7856419e5] <==
	I0804 01:39:36.073777       1 server_linux.go:69] "Using iptables proxy"
	E0804 01:39:36.547438       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:39.619434       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:42.692275       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:48.836604       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:39:58.051918       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0804 01:40:16.486372       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-998889\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0804 01:40:16.486481       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0804 01:40:16.626901       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:40:16.627024       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:40:16.627046       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:40:16.634050       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:40:16.635289       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:40:16.635402       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:40:16.649144       1 config.go:192] "Starting service config controller"
	I0804 01:40:16.649216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:40:16.649328       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:40:16.649338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:40:16.650505       1 config.go:319] "Starting node config controller"
	I0804 01:40:16.650514       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:40:16.749754       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:40:16.749816       1 shared_informer.go:320] Caches are synced for service config
	I0804 01:40:16.751375       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e32fb23a61d2d5f39a71a8975ef75e9ff9a47811ec6185cf5abcd26becc84372] <==
	E0804 01:36:23.925011       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.563553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.563810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.564171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.564276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:30.564490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:30.564593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:41.443627       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:41.443935       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:44.515293       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:44.515361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:44.515476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:44.515535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:36:59.875533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:36:59.875806       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:02.947571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:02.948594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:06.019341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:06.019570       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:36.739749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:36.739948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1897": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:39.812255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:39.812537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0804 01:37:49.028623       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	E0804 01:37:49.028718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-998889&resourceVersion=1839": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2d5d507e714a241edc4501b7f500d06d535ea73fde31d3b56e1a89476a0148f8] <==
	W0804 01:40:06.585345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.585456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.12:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:06.974066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:06.974109       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.12:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:07.932683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:07.932743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.12:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:11.719404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	E0804 01:40:11.719496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.12:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.12:8443: connect: connection refused
	W0804 01:40:14.815816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 01:40:14.817352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 01:40:14.817556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 01:40:14.817697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:40:14.817905       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:40:14.817947       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0804 01:40:14.818044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:40:14.818072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:40:14.818108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 01:40:14.818133       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 01:40:14.818243       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 01:40:14.818273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0804 01:40:16.276319       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 01:41:56.029782       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rrjmx\": pod busybox-fc5497c4f-rrjmx is already assigned to node \"ha-998889-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-rrjmx" node="ha-998889-m04"
	E0804 01:41:56.029995       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 634712e0-df4b-4255-bc9d-590377054b18(default/busybox-fc5497c4f-rrjmx) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-rrjmx"
	E0804 01:41:56.030042       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rrjmx\": pod busybox-fc5497c4f-rrjmx is already assigned to node \"ha-998889-m04\"" pod="default/busybox-fc5497c4f-rrjmx"
	I0804 01:41:56.030074       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-rrjmx" node="ha-998889-m04"
	
	
	==> kube-scheduler [3f264e5c2143d8018a76d184839f08d2214cb1f0657bfd59a0b05a039318d2df] <==
	E0804 01:37:48.066956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:37:48.334521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0804 01:37:48.334582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0804 01:37:48.485519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 01:37:48.485569       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 01:37:50.493761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 01:37:50.493814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 01:37:50.903135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0804 01:37:50.903243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:37:51.094003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0804 01:37:51.094104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0804 01:37:51.679039       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:37:51.679140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:37:52.079787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 01:37:52.079819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 01:37:52.084995       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 01:37:52.085021       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 01:37:52.096155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 01:37:52.096204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:37:52.118907       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0804 01:37:52.118971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0804 01:37:52.149451       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0804 01:37:52.150228       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0804 01:37:52.154811       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0804 01:37:52.156269       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 01:40:37 ha-998889 kubelet[1372]: E0804 01:40:37.385245    1372 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b2eb4a37-052e-4e8e-9b0d-d58847698eeb)\"" pod="kube-system/storage-provisioner" podUID="b2eb4a37-052e-4e8e-9b0d-d58847698eeb"
	Aug 04 01:40:39 ha-998889 kubelet[1372]: I0804 01:40:39.280221    1372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-v468b" podStartSLOduration=570.46213119 podStartE2EDuration="9m33.280158566s" podCreationTimestamp="2024-08-04 01:31:06 +0000 UTC" firstStartedPulling="2024-08-04 01:31:07.31195721 +0000 UTC m=+171.072902529" lastFinishedPulling="2024-08-04 01:31:10.129984575 +0000 UTC m=+173.890929905" observedRunningTime="2024-08-04 01:31:11.212674462 +0000 UTC m=+174.973619801" watchObservedRunningTime="2024-08-04 01:40:39.280158566 +0000 UTC m=+743.041103902"
	Aug 04 01:40:51 ha-998889 kubelet[1372]: I0804 01:40:51.386045    1372 scope.go:117] "RemoveContainer" containerID="011410390b0d2117ac8b43c23244f24dd25069ac34a908117a9a9a133c55662c"
	Aug 04 01:41:05 ha-998889 kubelet[1372]: I0804 01:41:05.385975    1372 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-998889" podUID="1baf4284-e439-4cfa-b46f-dc618a37580b"
	Aug 04 01:41:05 ha-998889 kubelet[1372]: I0804 01:41:05.440654    1372 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-998889"
	Aug 04 01:41:16 ha-998889 kubelet[1372]: E0804 01:41:16.428832    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:41:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:41:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:42:16 ha-998889 kubelet[1372]: E0804 01:42:16.426416    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:42:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:42:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:42:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:42:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:43:16 ha-998889 kubelet[1372]: E0804 01:43:16.432122    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:43:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:43:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:43:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:43:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 01:44:16 ha-998889 kubelet[1372]: E0804 01:44:16.425713    1372 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 01:44:16 ha-998889 kubelet[1372]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 01:44:16 ha-998889 kubelet[1372]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 01:44:16 ha-998889 kubelet[1372]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 01:44:16 ha-998889 kubelet[1372]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 01:44:32.874282  121139 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-90243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-998889 -n ha-998889
helpers_test.go:261: (dbg) Run:  kubectl --context ha-998889 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.90s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229184
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-229184
E0804 02:01:42.266178   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-229184: exit status 82 (2m1.850239501s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-229184-m03"  ...
	* Stopping node "multinode-229184-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-229184" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229184 --wait=true -v=8 --alsologtostderr
E0804 02:04:45.315948   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229184 --wait=true -v=8 --alsologtostderr: (3m23.677162636s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229184
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-229184 -n multinode-229184
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-229184 logs -n 25: (1.640436106s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184:/home/docker/cp-test_multinode-229184-m02_multinode-229184.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184 sudo cat                                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m02_multinode-229184.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03:/home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184-m03 sudo cat                                   | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp testdata/cp-test.txt                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184:/home/docker/cp-test_multinode-229184-m03_multinode-229184.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184 sudo cat                                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m03_multinode-229184.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02:/home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184-m02 sudo cat                                   | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-229184 node stop m03                                                          | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	| node    | multinode-229184 node start                                                             | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 02:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-229184                                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:00 UTC |                     |
	| stop    | -p multinode-229184                                                                     | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:00 UTC |                     |
	| start   | -p multinode-229184                                                                     | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:02 UTC | 04 Aug 24 02:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-229184                                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 02:02:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 02:02:38.440729  130743 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:02:38.441001  130743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:02:38.441012  130743 out.go:304] Setting ErrFile to fd 2...
	I0804 02:02:38.441016  130743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:02:38.441180  130743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:02:38.441762  130743 out.go:298] Setting JSON to false
	I0804 02:02:38.442661  130743 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13502,"bootTime":1722723456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:02:38.442724  130743 start.go:139] virtualization: kvm guest
	I0804 02:02:38.446318  130743 out.go:177] * [multinode-229184] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:02:38.447610  130743 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:02:38.447636  130743 notify.go:220] Checking for updates...
	I0804 02:02:38.450463  130743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:02:38.451994  130743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:02:38.453436  130743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:02:38.454665  130743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:02:38.456019  130743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:02:38.457634  130743 config.go:182] Loaded profile config "multinode-229184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:02:38.457731  130743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:02:38.458250  130743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 02:02:38.458311  130743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:02:38.473574  130743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0804 02:02:38.474079  130743 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:02:38.474733  130743 main.go:141] libmachine: Using API Version  1
	I0804 02:02:38.474753  130743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:02:38.475145  130743 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:02:38.475301  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.510250  130743 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 02:02:38.511511  130743 start.go:297] selected driver: kvm2
	I0804 02:02:38.511526  130743 start.go:901] validating driver "kvm2" against &{Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:02:38.511822  130743 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:02:38.512279  130743 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:02:38.512361  130743 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:02:38.527528  130743 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:02:38.528284  130743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 02:02:38.528364  130743 cni.go:84] Creating CNI manager for ""
	I0804 02:02:38.528380  130743 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 02:02:38.528452  130743 start.go:340] cluster config:
	{Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:02:38.528617  130743 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:02:38.530404  130743 out.go:177] * Starting "multinode-229184" primary control-plane node in "multinode-229184" cluster
	I0804 02:02:38.531650  130743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:02:38.531700  130743 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 02:02:38.531713  130743 cache.go:56] Caching tarball of preloaded images
	I0804 02:02:38.531796  130743 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:02:38.531809  130743 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 02:02:38.531947  130743 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/config.json ...
	I0804 02:02:38.532163  130743 start.go:360] acquireMachinesLock for multinode-229184: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:02:38.532216  130743 start.go:364] duration metric: took 30.567µs to acquireMachinesLock for "multinode-229184"
	I0804 02:02:38.532237  130743 start.go:96] Skipping create...Using existing machine configuration
	I0804 02:02:38.532248  130743 fix.go:54] fixHost starting: 
	I0804 02:02:38.532508  130743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 02:02:38.532547  130743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:02:38.546901  130743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0804 02:02:38.547343  130743 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:02:38.547809  130743 main.go:141] libmachine: Using API Version  1
	I0804 02:02:38.547831  130743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:02:38.548211  130743 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:02:38.548411  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.548572  130743 main.go:141] libmachine: (multinode-229184) Calling .GetState
	I0804 02:02:38.550155  130743 fix.go:112] recreateIfNeeded on multinode-229184: state=Running err=<nil>
	W0804 02:02:38.550179  130743 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 02:02:38.552182  130743 out.go:177] * Updating the running kvm2 "multinode-229184" VM ...
	I0804 02:02:38.553471  130743 machine.go:94] provisionDockerMachine start ...
	I0804 02:02:38.553491  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.553685  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.556296  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.556750  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.556773  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.556897  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.557116  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.557298  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.557446  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.557574  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.557797  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.557812  130743 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 02:02:38.674715  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229184
	
	I0804 02:02:38.674749  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.675069  130743 buildroot.go:166] provisioning hostname "multinode-229184"
	I0804 02:02:38.675099  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.675337  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.677938  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.678346  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.678379  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.678497  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.678675  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.678802  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.678967  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.679141  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.679300  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.679312  130743 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-229184 && echo "multinode-229184" | sudo tee /etc/hostname
	I0804 02:02:38.810869  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229184
	
	I0804 02:02:38.810904  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.813825  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.814116  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.814147  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.814308  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.814508  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.814693  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.814814  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.814979  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.815209  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.815227  130743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-229184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-229184/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-229184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 02:02:38.926410  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 02:02:38.926447  130743 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 02:02:38.926479  130743 buildroot.go:174] setting up certificates
	I0804 02:02:38.926491  130743 provision.go:84] configureAuth start
	I0804 02:02:38.926501  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.926790  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:02:38.929285  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.929641  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.929674  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.929803  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.932086  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.932416  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.932444  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.932579  130743 provision.go:143] copyHostCerts
	I0804 02:02:38.932617  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:02:38.932654  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 02:02:38.932664  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:02:38.932760  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 02:02:38.932935  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:02:38.932972  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 02:02:38.932982  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:02:38.933023  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 02:02:38.933108  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:02:38.933132  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 02:02:38.933149  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:02:38.933183  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 02:02:38.933265  130743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.multinode-229184 san=[127.0.0.1 192.168.39.183 localhost minikube multinode-229184]
	I0804 02:02:39.149731  130743 provision.go:177] copyRemoteCerts
	I0804 02:02:39.149789  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 02:02:39.149829  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:39.152335  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.152616  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:39.152663  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.152798  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:39.152998  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.153169  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:39.153299  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:02:39.240704  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 02:02:39.240791  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 02:02:39.267141  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 02:02:39.267221  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0804 02:02:39.292542  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 02:02:39.292631  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 02:02:39.319283  130743 provision.go:87] duration metric: took 392.775488ms to configureAuth
	I0804 02:02:39.319317  130743 buildroot.go:189] setting minikube options for container-runtime
	I0804 02:02:39.319591  130743 config.go:182] Loaded profile config "multinode-229184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:02:39.319683  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:39.322292  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.322602  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:39.322634  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.322749  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:39.322948  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.323128  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.323277  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:39.323443  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:39.323596  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:39.323609  130743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 02:04:10.015183  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 02:04:10.015255  130743 machine.go:97] duration metric: took 1m31.461764886s to provisionDockerMachine
	I0804 02:04:10.015270  130743 start.go:293] postStartSetup for "multinode-229184" (driver="kvm2")
	I0804 02:04:10.015281  130743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 02:04:10.015304  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.015625  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 02:04:10.015660  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.018822  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.019543  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.019574  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.019734  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.019930  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.020122  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.020293  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.109752  130743 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 02:04:10.114303  130743 command_runner.go:130] > NAME=Buildroot
	I0804 02:04:10.114367  130743 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0804 02:04:10.114380  130743 command_runner.go:130] > ID=buildroot
	I0804 02:04:10.114388  130743 command_runner.go:130] > VERSION_ID=2023.02.9
	I0804 02:04:10.114401  130743 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0804 02:04:10.114490  130743 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 02:04:10.114517  130743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 02:04:10.114594  130743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 02:04:10.114672  130743 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 02:04:10.114684  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 02:04:10.114781  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 02:04:10.124428  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:04:10.150749  130743 start.go:296] duration metric: took 135.461951ms for postStartSetup
	I0804 02:04:10.150812  130743 fix.go:56] duration metric: took 1m31.618564434s for fixHost
	I0804 02:04:10.150857  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.153442  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.153877  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.153907  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.154060  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.154278  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.154460  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.154594  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.154746  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:04:10.154914  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:04:10.154923  130743 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 02:04:10.266695  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722737050.243907130
	
	I0804 02:04:10.266723  130743 fix.go:216] guest clock: 1722737050.243907130
	I0804 02:04:10.266732  130743 fix.go:229] Guest: 2024-08-04 02:04:10.24390713 +0000 UTC Remote: 2024-08-04 02:04:10.150835405 +0000 UTC m=+91.746146853 (delta=93.071725ms)
	I0804 02:04:10.266777  130743 fix.go:200] guest clock delta is within tolerance: 93.071725ms
	I0804 02:04:10.266793  130743 start.go:83] releasing machines lock for "multinode-229184", held for 1m31.734564683s
	I0804 02:04:10.266825  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.267110  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:04:10.269639  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.270034  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.270077  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.270225  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.270822  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.271028  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.271146  130743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 02:04:10.271199  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.271344  130743 ssh_runner.go:195] Run: cat /version.json
	I0804 02:04:10.271374  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.274283  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274604  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.274642  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274698  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274820  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.275008  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.275136  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.275200  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.275228  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.275241  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.275402  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.275572  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.275732  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.275908  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.374368  130743 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0804 02:04:10.374532  130743 ssh_runner.go:195] Run: systemctl --version
	I0804 02:04:10.398040  130743 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 02:04:10.398739  130743 command_runner.go:130] > systemd 252 (252)
	I0804 02:04:10.398763  130743 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0804 02:04:10.398825  130743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 02:04:10.559561  130743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 02:04:10.566772  130743 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0804 02:04:10.566887  130743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 02:04:10.566966  130743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 02:04:10.578199  130743 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 02:04:10.578236  130743 start.go:495] detecting cgroup driver to use...
	I0804 02:04:10.578308  130743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 02:04:10.595136  130743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 02:04:10.609618  130743 docker.go:217] disabling cri-docker service (if available) ...
	I0804 02:04:10.609689  130743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 02:04:10.623234  130743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 02:04:10.637203  130743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 02:04:10.787139  130743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 02:04:10.931772  130743 docker.go:233] disabling docker service ...
	I0804 02:04:10.931860  130743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 02:04:10.948362  130743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 02:04:10.962949  130743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 02:04:11.105335  130743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 02:04:11.249895  130743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 02:04:11.264535  130743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 02:04:11.284432  130743 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0804 02:04:11.284905  130743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 02:04:11.284962  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.295953  130743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 02:04:11.296025  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.307235  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.317874  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.328444  130743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 02:04:11.339731  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.350722  130743 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.362760  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.373547  130743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 02:04:11.383335  130743 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 02:04:11.383428  130743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 02:04:11.392776  130743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:04:11.534752  130743 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 02:04:12.412281  130743 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 02:04:12.412374  130743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 02:04:12.417117  130743 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0804 02:04:12.417139  130743 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 02:04:12.417146  130743 command_runner.go:130] > Device: 0,22	Inode: 1335        Links: 1
	I0804 02:04:12.417152  130743 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 02:04:12.417157  130743 command_runner.go:130] > Access: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417163  130743 command_runner.go:130] > Modify: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417168  130743 command_runner.go:130] > Change: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417172  130743 command_runner.go:130] >  Birth: -
	I0804 02:04:12.417286  130743 start.go:563] Will wait 60s for crictl version
	I0804 02:04:12.417331  130743 ssh_runner.go:195] Run: which crictl
	I0804 02:04:12.421040  130743 command_runner.go:130] > /usr/bin/crictl
	I0804 02:04:12.421104  130743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 02:04:12.460331  130743 command_runner.go:130] > Version:  0.1.0
	I0804 02:04:12.460360  130743 command_runner.go:130] > RuntimeName:  cri-o
	I0804 02:04:12.460367  130743 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0804 02:04:12.460376  130743 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 02:04:12.460395  130743 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 02:04:12.460478  130743 ssh_runner.go:195] Run: crio --version
	I0804 02:04:12.488467  130743 command_runner.go:130] > crio version 1.29.1
	I0804 02:04:12.488490  130743 command_runner.go:130] > Version:        1.29.1
	I0804 02:04:12.488496  130743 command_runner.go:130] > GitCommit:      unknown
	I0804 02:04:12.488501  130743 command_runner.go:130] > GitCommitDate:  unknown
	I0804 02:04:12.488505  130743 command_runner.go:130] > GitTreeState:   clean
	I0804 02:04:12.488511  130743 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 02:04:12.488515  130743 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 02:04:12.488519  130743 command_runner.go:130] > Compiler:       gc
	I0804 02:04:12.488524  130743 command_runner.go:130] > Platform:       linux/amd64
	I0804 02:04:12.488528  130743 command_runner.go:130] > Linkmode:       dynamic
	I0804 02:04:12.488532  130743 command_runner.go:130] > BuildTags:      
	I0804 02:04:12.488537  130743 command_runner.go:130] >   containers_image_ostree_stub
	I0804 02:04:12.488541  130743 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 02:04:12.488544  130743 command_runner.go:130] >   btrfs_noversion
	I0804 02:04:12.488548  130743 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 02:04:12.488553  130743 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 02:04:12.488559  130743 command_runner.go:130] >   seccomp
	I0804 02:04:12.488563  130743 command_runner.go:130] > LDFlags:          unknown
	I0804 02:04:12.488568  130743 command_runner.go:130] > SeccompEnabled:   true
	I0804 02:04:12.488572  130743 command_runner.go:130] > AppArmorEnabled:  false
	I0804 02:04:12.489733  130743 ssh_runner.go:195] Run: crio --version
	I0804 02:04:12.519326  130743 command_runner.go:130] > crio version 1.29.1
	I0804 02:04:12.519355  130743 command_runner.go:130] > Version:        1.29.1
	I0804 02:04:12.519364  130743 command_runner.go:130] > GitCommit:      unknown
	I0804 02:04:12.519387  130743 command_runner.go:130] > GitCommitDate:  unknown
	I0804 02:04:12.519398  130743 command_runner.go:130] > GitTreeState:   clean
	I0804 02:04:12.519406  130743 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 02:04:12.519412  130743 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 02:04:12.519420  130743 command_runner.go:130] > Compiler:       gc
	I0804 02:04:12.519428  130743 command_runner.go:130] > Platform:       linux/amd64
	I0804 02:04:12.519435  130743 command_runner.go:130] > Linkmode:       dynamic
	I0804 02:04:12.519442  130743 command_runner.go:130] > BuildTags:      
	I0804 02:04:12.519450  130743 command_runner.go:130] >   containers_image_ostree_stub
	I0804 02:04:12.519458  130743 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 02:04:12.519477  130743 command_runner.go:130] >   btrfs_noversion
	I0804 02:04:12.519485  130743 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 02:04:12.519489  130743 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 02:04:12.519493  130743 command_runner.go:130] >   seccomp
	I0804 02:04:12.519497  130743 command_runner.go:130] > LDFlags:          unknown
	I0804 02:04:12.519501  130743 command_runner.go:130] > SeccompEnabled:   true
	I0804 02:04:12.519505  130743 command_runner.go:130] > AppArmorEnabled:  false
	I0804 02:04:12.522218  130743 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 02:04:12.523717  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:04:12.526308  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:12.526700  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:12.526731  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:12.526931  130743 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 02:04:12.531283  130743 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0804 02:04:12.531404  130743 kubeadm.go:883] updating cluster {Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 02:04:12.531546  130743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:04:12.531598  130743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:04:12.574736  130743 command_runner.go:130] > {
	I0804 02:04:12.574767  130743 command_runner.go:130] >   "images": [
	I0804 02:04:12.574773  130743 command_runner.go:130] >     {
	I0804 02:04:12.574786  130743 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 02:04:12.574793  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.574803  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 02:04:12.574809  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574816  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.574830  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 02:04:12.574841  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 02:04:12.574847  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574855  130743 command_runner.go:130] >       "size": "87165492",
	I0804 02:04:12.574861  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.574868  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.574878  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.574886  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.574895  130743 command_runner.go:130] >     },
	I0804 02:04:12.574901  130743 command_runner.go:130] >     {
	I0804 02:04:12.574913  130743 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 02:04:12.574922  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.574933  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 02:04:12.574939  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574949  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.574961  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 02:04:12.574985  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 02:04:12.574995  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575002  130743 command_runner.go:130] >       "size": "87174707",
	I0804 02:04:12.575008  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575024  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575035  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575044  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575053  130743 command_runner.go:130] >     },
	I0804 02:04:12.575061  130743 command_runner.go:130] >     {
	I0804 02:04:12.575072  130743 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 02:04:12.575080  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575091  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 02:04:12.575098  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575105  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575115  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 02:04:12.575127  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 02:04:12.575134  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575141  130743 command_runner.go:130] >       "size": "1363676",
	I0804 02:04:12.575149  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575165  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575173  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575182  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575189  130743 command_runner.go:130] >     },
	I0804 02:04:12.575194  130743 command_runner.go:130] >     {
	I0804 02:04:12.575206  130743 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 02:04:12.575217  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575227  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 02:04:12.575235  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575243  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575259  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 02:04:12.575287  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 02:04:12.575296  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575304  130743 command_runner.go:130] >       "size": "31470524",
	I0804 02:04:12.575312  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575320  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575328  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575342  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575350  130743 command_runner.go:130] >     },
	I0804 02:04:12.575357  130743 command_runner.go:130] >     {
	I0804 02:04:12.575369  130743 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 02:04:12.575377  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575385  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 02:04:12.575393  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575399  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575412  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 02:04:12.575425  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 02:04:12.575432  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575438  130743 command_runner.go:130] >       "size": "61245718",
	I0804 02:04:12.575446  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575456  130743 command_runner.go:130] >       "username": "nonroot",
	I0804 02:04:12.575465  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575471  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575479  130743 command_runner.go:130] >     },
	I0804 02:04:12.575486  130743 command_runner.go:130] >     {
	I0804 02:04:12.575496  130743 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 02:04:12.575506  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575514  130743 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 02:04:12.575521  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575527  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575539  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 02:04:12.575554  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 02:04:12.575562  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575569  130743 command_runner.go:130] >       "size": "150779692",
	I0804 02:04:12.575579  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575588  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575596  130743 command_runner.go:130] >       },
	I0804 02:04:12.575606  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575615  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575624  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575633  130743 command_runner.go:130] >     },
	I0804 02:04:12.575640  130743 command_runner.go:130] >     {
	I0804 02:04:12.575650  130743 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 02:04:12.575667  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575679  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 02:04:12.575686  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575695  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575708  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 02:04:12.575721  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 02:04:12.575731  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575738  130743 command_runner.go:130] >       "size": "117609954",
	I0804 02:04:12.575747  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575757  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575765  130743 command_runner.go:130] >       },
	I0804 02:04:12.575771  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575779  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575787  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575792  130743 command_runner.go:130] >     },
	I0804 02:04:12.575800  130743 command_runner.go:130] >     {
	I0804 02:04:12.575808  130743 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 02:04:12.575814  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575824  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 02:04:12.575832  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575839  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575867  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 02:04:12.575880  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 02:04:12.575888  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575896  130743 command_runner.go:130] >       "size": "112198984",
	I0804 02:04:12.575905  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575913  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575922  130743 command_runner.go:130] >       },
	I0804 02:04:12.575930  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575935  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575941  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575945  130743 command_runner.go:130] >     },
	I0804 02:04:12.575950  130743 command_runner.go:130] >     {
	I0804 02:04:12.575958  130743 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 02:04:12.575963  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575969  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 02:04:12.575976  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575985  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575998  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 02:04:12.576012  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 02:04:12.576020  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576027  130743 command_runner.go:130] >       "size": "85953945",
	I0804 02:04:12.576037  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.576046  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576055  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576064  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.576068  130743 command_runner.go:130] >     },
	I0804 02:04:12.576075  130743 command_runner.go:130] >     {
	I0804 02:04:12.576082  130743 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 02:04:12.576088  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.576093  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 02:04:12.576102  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576108  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.576116  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 02:04:12.576133  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 02:04:12.576140  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576145  130743 command_runner.go:130] >       "size": "63051080",
	I0804 02:04:12.576156  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.576163  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.576166  130743 command_runner.go:130] >       },
	I0804 02:04:12.576170  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576176  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576182  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.576190  130743 command_runner.go:130] >     },
	I0804 02:04:12.576196  130743 command_runner.go:130] >     {
	I0804 02:04:12.576206  130743 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 02:04:12.576216  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.576224  130743 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 02:04:12.576232  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576239  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.576253  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 02:04:12.576267  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 02:04:12.576278  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576284  130743 command_runner.go:130] >       "size": "750414",
	I0804 02:04:12.576293  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.576300  130743 command_runner.go:130] >         "value": "65535"
	I0804 02:04:12.576308  130743 command_runner.go:130] >       },
	I0804 02:04:12.576314  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576323  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576330  130743 command_runner.go:130] >       "pinned": true
	I0804 02:04:12.576338  130743 command_runner.go:130] >     }
	I0804 02:04:12.576344  130743 command_runner.go:130] >   ]
	I0804 02:04:12.576351  130743 command_runner.go:130] > }
	I0804 02:04:12.576610  130743 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:04:12.576628  130743 crio.go:433] Images already preloaded, skipping extraction
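The JSON dump above is what crio.go parses before concluding "all images are preloaded for cri-o runtime."; the same `sudo crictl images --output json` call is repeated just below for the cache_images.go check. A minimal sketch of that kind of check follows, assuming an abbreviated struct that mirrors only the fields used here and illustrative helper names (`listImageTags`, the hard-coded `required` list); it is not minikube's actual implementation and it runs crictl locally instead of over SSH inside the VM.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the subset of the `crictl images --output json` fields used here.
	type image struct {
		RepoTags []string `json:"repoTags"`
	}

	type imageList struct {
		Images []image `json:"images"`
	}

	// listImageTags runs crictl and returns every repo tag it reports.
	func listImageTags() (map[string]bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return nil, err
		}
		tags := make(map[string]bool)
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				tags[t] = true
			}
		}
		return tags, nil
	}

	func main() {
		// Expected tags taken from the log above; the list is illustrative, not exhaustive.
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		tags, err := listImageTags()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, want := range required {
			if !tags[want] {
				fmt.Println("missing, preload extraction needed:", want)
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}

On a node where the preload tarball has not been extracted yet, a loop like this would report the first missing tag and the caller would fall back to extraction instead of skipping it.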
	I0804 02:04:12.576685  130743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:04:12.610502  130743 command_runner.go:130] > {
	I0804 02:04:12.610535  130743 command_runner.go:130] >   "images": [
	I0804 02:04:12.610542  130743 command_runner.go:130] >     {
	I0804 02:04:12.610554  130743 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 02:04:12.610561  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610570  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 02:04:12.610576  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610583  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610608  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 02:04:12.610620  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 02:04:12.610626  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610634  130743 command_runner.go:130] >       "size": "87165492",
	I0804 02:04:12.610641  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610646  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610654  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610659  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610663  130743 command_runner.go:130] >     },
	I0804 02:04:12.610666  130743 command_runner.go:130] >     {
	I0804 02:04:12.610672  130743 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 02:04:12.610677  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610682  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 02:04:12.610689  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610692  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610702  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 02:04:12.610709  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 02:04:12.610715  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610719  130743 command_runner.go:130] >       "size": "87174707",
	I0804 02:04:12.610723  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610729  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610733  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610737  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610743  130743 command_runner.go:130] >     },
	I0804 02:04:12.610746  130743 command_runner.go:130] >     {
	I0804 02:04:12.610751  130743 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 02:04:12.610757  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610763  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 02:04:12.610768  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610774  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610780  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 02:04:12.610789  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 02:04:12.610793  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610796  130743 command_runner.go:130] >       "size": "1363676",
	I0804 02:04:12.610801  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610805  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610811  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610815  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610821  130743 command_runner.go:130] >     },
	I0804 02:04:12.610824  130743 command_runner.go:130] >     {
	I0804 02:04:12.610831  130743 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 02:04:12.610837  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610842  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 02:04:12.610848  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610851  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610864  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 02:04:12.610878  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 02:04:12.610884  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610888  130743 command_runner.go:130] >       "size": "31470524",
	I0804 02:04:12.610894  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610898  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610905  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610909  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610915  130743 command_runner.go:130] >     },
	I0804 02:04:12.610918  130743 command_runner.go:130] >     {
	I0804 02:04:12.610927  130743 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 02:04:12.610931  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610942  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 02:04:12.610947  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610956  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610967  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 02:04:12.610981  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 02:04:12.610989  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610993  130743 command_runner.go:130] >       "size": "61245718",
	I0804 02:04:12.610999  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.611005  130743 command_runner.go:130] >       "username": "nonroot",
	I0804 02:04:12.611011  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611015  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611021  130743 command_runner.go:130] >     },
	I0804 02:04:12.611024  130743 command_runner.go:130] >     {
	I0804 02:04:12.611032  130743 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 02:04:12.611036  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611041  130743 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 02:04:12.611047  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611055  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611062  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 02:04:12.611071  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 02:04:12.611077  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611081  130743 command_runner.go:130] >       "size": "150779692",
	I0804 02:04:12.611087  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611091  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611097  130743 command_runner.go:130] >       },
	I0804 02:04:12.611101  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611115  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611121  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611124  130743 command_runner.go:130] >     },
	I0804 02:04:12.611130  130743 command_runner.go:130] >     {
	I0804 02:04:12.611136  130743 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 02:04:12.611140  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611145  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 02:04:12.611150  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611156  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611167  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 02:04:12.611177  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 02:04:12.611183  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611187  130743 command_runner.go:130] >       "size": "117609954",
	I0804 02:04:12.611193  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611197  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611203  130743 command_runner.go:130] >       },
	I0804 02:04:12.611208  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611213  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611218  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611222  130743 command_runner.go:130] >     },
	I0804 02:04:12.611226  130743 command_runner.go:130] >     {
	I0804 02:04:12.611237  130743 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 02:04:12.611246  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611257  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 02:04:12.611266  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611272  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611297  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 02:04:12.611313  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 02:04:12.611319  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611324  130743 command_runner.go:130] >       "size": "112198984",
	I0804 02:04:12.611330  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611336  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611341  130743 command_runner.go:130] >       },
	I0804 02:04:12.611347  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611352  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611358  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611363  130743 command_runner.go:130] >     },
	I0804 02:04:12.611368  130743 command_runner.go:130] >     {
	I0804 02:04:12.611380  130743 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 02:04:12.611389  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611398  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 02:04:12.611404  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611414  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611428  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 02:04:12.611441  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 02:04:12.611449  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611455  130743 command_runner.go:130] >       "size": "85953945",
	I0804 02:04:12.611464  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.611472  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611480  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611486  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611495  130743 command_runner.go:130] >     },
	I0804 02:04:12.611503  130743 command_runner.go:130] >     {
	I0804 02:04:12.611515  130743 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 02:04:12.611532  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611543  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 02:04:12.611552  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611561  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611574  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 02:04:12.611588  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 02:04:12.611597  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611607  130743 command_runner.go:130] >       "size": "63051080",
	I0804 02:04:12.611616  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611625  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611633  130743 command_runner.go:130] >       },
	I0804 02:04:12.611638  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611641  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611648  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611652  130743 command_runner.go:130] >     },
	I0804 02:04:12.611659  130743 command_runner.go:130] >     {
	I0804 02:04:12.611664  130743 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 02:04:12.611668  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611672  130743 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 02:04:12.611675  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611679  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611685  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 02:04:12.611691  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 02:04:12.611696  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611703  130743 command_runner.go:130] >       "size": "750414",
	I0804 02:04:12.611709  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611714  130743 command_runner.go:130] >         "value": "65535"
	I0804 02:04:12.611719  130743 command_runner.go:130] >       },
	I0804 02:04:12.611725  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611731  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611740  130743 command_runner.go:130] >       "pinned": true
	I0804 02:04:12.611746  130743 command_runner.go:130] >     }
	I0804 02:04:12.611754  130743 command_runner.go:130] >   ]
	I0804 02:04:12.611760  130743 command_runner.go:130] > }
	I0804 02:04:12.611910  130743 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:04:12.611929  130743 cache_images.go:84] Images are preloaded, skipping loading
	I0804 02:04:12.611938  130743 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.30.3 crio true true} ...
	I0804 02:04:12.612041  130743 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-229184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
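The kubelet fragment logged by kubeadm.go above is rendered from the cluster config printed after it (Kubernetes v1.30.3, node name multinode-229184, node IP 192.168.39.183). A rough text/template sketch that reproduces that fragment is below; the template text and the values are copied from the log, while the struct and the decision to print to stdout (rather than write a systemd drop-in inside the VM) are illustrative simplifications, not minikube's kubeadm.go code.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletOpts holds only the values that appear in the logged unit fragment.
	type kubeletOpts struct {
		BinDir   string
		NodeName string
		NodeIP   string
	}

	// unitTmpl reproduces the [Unit]/[Service]/[Install] fragment logged above.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log lines above.
		opts := kubeletOpts{
			BinDir:   "/var/lib/minikube/binaries/v1.30.3",
			NodeName: "multinode-229184",
			NodeIP:   "192.168.39.183",
		}
		// Printing keeps the sketch self-contained; the real flow writes this
		// out as a kubelet service override and reloads systemd.
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}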
	I0804 02:04:12.612111  130743 ssh_runner.go:195] Run: crio config
	I0804 02:04:12.658089  130743 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0804 02:04:12.658132  130743 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0804 02:04:12.658144  130743 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0804 02:04:12.658149  130743 command_runner.go:130] > #
	I0804 02:04:12.658163  130743 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0804 02:04:12.658174  130743 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0804 02:04:12.658184  130743 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0804 02:04:12.658206  130743 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0804 02:04:12.658212  130743 command_runner.go:130] > # reload'.
	I0804 02:04:12.658236  130743 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0804 02:04:12.658250  130743 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0804 02:04:12.658259  130743 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0804 02:04:12.658268  130743 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0804 02:04:12.658276  130743 command_runner.go:130] > [crio]
	I0804 02:04:12.658285  130743 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0804 02:04:12.658296  130743 command_runner.go:130] > # containers images, in this directory.
	I0804 02:04:12.658304  130743 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0804 02:04:12.658318  130743 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0804 02:04:12.658329  130743 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0804 02:04:12.658341  130743 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0804 02:04:12.658351  130743 command_runner.go:130] > # imagestore = ""
	I0804 02:04:12.658359  130743 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0804 02:04:12.658370  130743 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0804 02:04:12.658380  130743 command_runner.go:130] > storage_driver = "overlay"
	I0804 02:04:12.658389  130743 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0804 02:04:12.658401  130743 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0804 02:04:12.658407  130743 command_runner.go:130] > storage_option = [
	I0804 02:04:12.658417  130743 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0804 02:04:12.658423  130743 command_runner.go:130] > ]
	I0804 02:04:12.658435  130743 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0804 02:04:12.658447  130743 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0804 02:04:12.658456  130743 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0804 02:04:12.658465  130743 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0804 02:04:12.658477  130743 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0804 02:04:12.658484  130743 command_runner.go:130] > # always happen on a node reboot
	I0804 02:04:12.658492  130743 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0804 02:04:12.658505  130743 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0804 02:04:12.658520  130743 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0804 02:04:12.658532  130743 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0804 02:04:12.658543  130743 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0804 02:04:12.658559  130743 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0804 02:04:12.658575  130743 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0804 02:04:12.658586  130743 command_runner.go:130] > # internal_wipe = true
	I0804 02:04:12.658599  130743 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0804 02:04:12.658610  130743 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0804 02:04:12.658616  130743 command_runner.go:130] > # internal_repair = false
	I0804 02:04:12.658626  130743 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0804 02:04:12.658636  130743 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0804 02:04:12.658648  130743 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0804 02:04:12.658659  130743 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0804 02:04:12.658671  130743 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0804 02:04:12.658680  130743 command_runner.go:130] > [crio.api]
	I0804 02:04:12.658688  130743 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0804 02:04:12.658699  130743 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0804 02:04:12.658712  130743 command_runner.go:130] > # IP address on which the stream server will listen.
	I0804 02:04:12.658723  130743 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0804 02:04:12.658735  130743 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0804 02:04:12.658745  130743 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0804 02:04:12.658752  130743 command_runner.go:130] > # stream_port = "0"
	I0804 02:04:12.658761  130743 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0804 02:04:12.658771  130743 command_runner.go:130] > # stream_enable_tls = false
	I0804 02:04:12.658780  130743 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0804 02:04:12.658789  130743 command_runner.go:130] > # stream_idle_timeout = ""
	I0804 02:04:12.658798  130743 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0804 02:04:12.658808  130743 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0804 02:04:12.658815  130743 command_runner.go:130] > # minutes.
	I0804 02:04:12.658821  130743 command_runner.go:130] > # stream_tls_cert = ""
	I0804 02:04:12.658833  130743 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0804 02:04:12.658842  130743 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0804 02:04:12.658851  130743 command_runner.go:130] > # stream_tls_key = ""
	I0804 02:04:12.658860  130743 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0804 02:04:12.658872  130743 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0804 02:04:12.658895  130743 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0804 02:04:12.658904  130743 command_runner.go:130] > # stream_tls_ca = ""
	I0804 02:04:12.658916  130743 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 02:04:12.658925  130743 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0804 02:04:12.658936  130743 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 02:04:12.658947  130743 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0804 02:04:12.658957  130743 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0804 02:04:12.658968  130743 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0804 02:04:12.658977  130743 command_runner.go:130] > [crio.runtime]
	I0804 02:04:12.658985  130743 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0804 02:04:12.658997  130743 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0804 02:04:12.659006  130743 command_runner.go:130] > # "nofile=1024:2048"
	I0804 02:04:12.659014  130743 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0804 02:04:12.659022  130743 command_runner.go:130] > # default_ulimits = [
	I0804 02:04:12.659025  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659031  130743 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0804 02:04:12.659037  130743 command_runner.go:130] > # no_pivot = false
	I0804 02:04:12.659047  130743 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0804 02:04:12.659059  130743 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0804 02:04:12.659066  130743 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0804 02:04:12.659078  130743 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0804 02:04:12.659088  130743 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0804 02:04:12.659097  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 02:04:12.659112  130743 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0804 02:04:12.659119  130743 command_runner.go:130] > # Cgroup setting for conmon
	I0804 02:04:12.659131  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0804 02:04:12.659140  130743 command_runner.go:130] > conmon_cgroup = "pod"
	I0804 02:04:12.659150  130743 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0804 02:04:12.659161  130743 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0804 02:04:12.659171  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 02:04:12.659179  130743 command_runner.go:130] > conmon_env = [
	I0804 02:04:12.659190  130743 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 02:04:12.659200  130743 command_runner.go:130] > ]
	I0804 02:04:12.659208  130743 command_runner.go:130] > # Additional environment variables to set for all the
	I0804 02:04:12.659220  130743 command_runner.go:130] > # containers. These are overridden if set in the
	I0804 02:04:12.659229  130743 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0804 02:04:12.659238  130743 command_runner.go:130] > # default_env = [
	I0804 02:04:12.659243  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659254  130743 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0804 02:04:12.659268  130743 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0804 02:04:12.659276  130743 command_runner.go:130] > # selinux = false
	I0804 02:04:12.659285  130743 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0804 02:04:12.659298  130743 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0804 02:04:12.659307  130743 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0804 02:04:12.659316  130743 command_runner.go:130] > # seccomp_profile = ""
	I0804 02:04:12.659324  130743 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0804 02:04:12.659338  130743 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0804 02:04:12.659348  130743 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0804 02:04:12.659358  130743 command_runner.go:130] > # which might increase security.
	I0804 02:04:12.659366  130743 command_runner.go:130] > # This option is currently deprecated,
	I0804 02:04:12.659378  130743 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0804 02:04:12.659388  130743 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0804 02:04:12.659397  130743 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0804 02:04:12.659412  130743 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0804 02:04:12.659425  130743 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0804 02:04:12.659438  130743 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0804 02:04:12.659447  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.659459  130743 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0804 02:04:12.659468  130743 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0804 02:04:12.659476  130743 command_runner.go:130] > # the cgroup blockio controller.
	I0804 02:04:12.659483  130743 command_runner.go:130] > # blockio_config_file = ""
	I0804 02:04:12.659496  130743 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0804 02:04:12.659505  130743 command_runner.go:130] > # blockio parameters.
	I0804 02:04:12.659511  130743 command_runner.go:130] > # blockio_reload = false
	I0804 02:04:12.659522  130743 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0804 02:04:12.659532  130743 command_runner.go:130] > # irqbalance daemon.
	I0804 02:04:12.659540  130743 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0804 02:04:12.659550  130743 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0804 02:04:12.659565  130743 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0804 02:04:12.659579  130743 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0804 02:04:12.659592  130743 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0804 02:04:12.659607  130743 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0804 02:04:12.659617  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.659627  130743 command_runner.go:130] > # rdt_config_file = ""
	I0804 02:04:12.659635  130743 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0804 02:04:12.659646  130743 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0804 02:04:12.659674  130743 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0804 02:04:12.659684  130743 command_runner.go:130] > # separate_pull_cgroup = ""
	I0804 02:04:12.659694  130743 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0804 02:04:12.659706  130743 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0804 02:04:12.659714  130743 command_runner.go:130] > # will be added.
	I0804 02:04:12.659721  130743 command_runner.go:130] > # default_capabilities = [
	I0804 02:04:12.659727  130743 command_runner.go:130] > # 	"CHOWN",
	I0804 02:04:12.659736  130743 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0804 02:04:12.659742  130743 command_runner.go:130] > # 	"FSETID",
	I0804 02:04:12.659751  130743 command_runner.go:130] > # 	"FOWNER",
	I0804 02:04:12.659757  130743 command_runner.go:130] > # 	"SETGID",
	I0804 02:04:12.659766  130743 command_runner.go:130] > # 	"SETUID",
	I0804 02:04:12.659773  130743 command_runner.go:130] > # 	"SETPCAP",
	I0804 02:04:12.659785  130743 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0804 02:04:12.659794  130743 command_runner.go:130] > # 	"KILL",
	I0804 02:04:12.659800  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659818  130743 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0804 02:04:12.659832  130743 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0804 02:04:12.659843  130743 command_runner.go:130] > # add_inheritable_capabilities = false
	I0804 02:04:12.659856  130743 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0804 02:04:12.659867  130743 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 02:04:12.659875  130743 command_runner.go:130] > default_sysctls = [
	I0804 02:04:12.659884  130743 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0804 02:04:12.659892  130743 command_runner.go:130] > ]
	I0804 02:04:12.659899  130743 command_runner.go:130] > # List of devices on the host that a
	I0804 02:04:12.659911  130743 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0804 02:04:12.659920  130743 command_runner.go:130] > # allowed_devices = [
	I0804 02:04:12.659926  130743 command_runner.go:130] > # 	"/dev/fuse",
	I0804 02:04:12.659935  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659943  130743 command_runner.go:130] > # List of additional devices, specified as
	I0804 02:04:12.659958  130743 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0804 02:04:12.659970  130743 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0804 02:04:12.659982  130743 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 02:04:12.659991  130743 command_runner.go:130] > # additional_devices = [
	I0804 02:04:12.659997  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660006  130743 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0804 02:04:12.660017  130743 command_runner.go:130] > # cdi_spec_dirs = [
	I0804 02:04:12.660027  130743 command_runner.go:130] > # 	"/etc/cdi",
	I0804 02:04:12.660034  130743 command_runner.go:130] > # 	"/var/run/cdi",
	I0804 02:04:12.660039  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660049  130743 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0804 02:04:12.660059  130743 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0804 02:04:12.660065  130743 command_runner.go:130] > # Defaults to false.
	I0804 02:04:12.660074  130743 command_runner.go:130] > # device_ownership_from_security_context = false
	I0804 02:04:12.660090  130743 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0804 02:04:12.660103  130743 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0804 02:04:12.660123  130743 command_runner.go:130] > # hooks_dir = [
	I0804 02:04:12.660132  130743 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0804 02:04:12.660140  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660151  130743 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0804 02:04:12.660164  130743 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0804 02:04:12.660175  130743 command_runner.go:130] > # its default mounts from the following two files:
	I0804 02:04:12.660183  130743 command_runner.go:130] > #
	I0804 02:04:12.660193  130743 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0804 02:04:12.660205  130743 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0804 02:04:12.660215  130743 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0804 02:04:12.660225  130743 command_runner.go:130] > #
	I0804 02:04:12.660236  130743 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0804 02:04:12.660249  130743 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0804 02:04:12.660261  130743 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0804 02:04:12.660273  130743 command_runner.go:130] > #      only add mounts it finds in this file.
	I0804 02:04:12.660281  130743 command_runner.go:130] > #
	I0804 02:04:12.660288  130743 command_runner.go:130] > # default_mounts_file = ""
	I0804 02:04:12.660299  130743 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0804 02:04:12.660311  130743 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0804 02:04:12.660317  130743 command_runner.go:130] > pids_limit = 1024
	I0804 02:04:12.660329  130743 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0804 02:04:12.660342  130743 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0804 02:04:12.660355  130743 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0804 02:04:12.660370  130743 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0804 02:04:12.660380  130743 command_runner.go:130] > # log_size_max = -1
	I0804 02:04:12.660390  130743 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0804 02:04:12.660400  130743 command_runner.go:130] > # log_to_journald = false
	I0804 02:04:12.660409  130743 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0804 02:04:12.660422  130743 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0804 02:04:12.660434  130743 command_runner.go:130] > # Path to directory for container attach sockets.
	I0804 02:04:12.660444  130743 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0804 02:04:12.660452  130743 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0804 02:04:12.660461  130743 command_runner.go:130] > # bind_mount_prefix = ""
	I0804 02:04:12.660470  130743 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0804 02:04:12.660479  130743 command_runner.go:130] > # read_only = false
	I0804 02:04:12.660491  130743 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0804 02:04:12.660502  130743 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0804 02:04:12.660510  130743 command_runner.go:130] > # live configuration reload.
	I0804 02:04:12.660519  130743 command_runner.go:130] > # log_level = "info"
	I0804 02:04:12.660530  130743 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0804 02:04:12.660543  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.660553  130743 command_runner.go:130] > # log_filter = ""
	I0804 02:04:12.660564  130743 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0804 02:04:12.660582  130743 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0804 02:04:12.660591  130743 command_runner.go:130] > # separated by comma.
	I0804 02:04:12.660602  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660612  130743 command_runner.go:130] > # uid_mappings = ""
	I0804 02:04:12.660622  130743 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0804 02:04:12.660632  130743 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0804 02:04:12.660638  130743 command_runner.go:130] > # separated by comma.
	I0804 02:04:12.660650  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660660  130743 command_runner.go:130] > # gid_mappings = ""
	I0804 02:04:12.660670  130743 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0804 02:04:12.660683  130743 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 02:04:12.660693  130743 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 02:04:12.660710  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660722  130743 command_runner.go:130] > # minimum_mappable_uid = -1
	I0804 02:04:12.660732  130743 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0804 02:04:12.660745  130743 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 02:04:12.660760  130743 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 02:04:12.660772  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660779  130743 command_runner.go:130] > # minimum_mappable_gid = -1
	I0804 02:04:12.660790  130743 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0804 02:04:12.660804  130743 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0804 02:04:12.660812  130743 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0804 02:04:12.660823  130743 command_runner.go:130] > # ctr_stop_timeout = 30
	I0804 02:04:12.660833  130743 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0804 02:04:12.660846  130743 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0804 02:04:12.660860  130743 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0804 02:04:12.660871  130743 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0804 02:04:12.660881  130743 command_runner.go:130] > drop_infra_ctr = false
	I0804 02:04:12.660890  130743 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0804 02:04:12.660904  130743 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0804 02:04:12.660915  130743 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0804 02:04:12.660922  130743 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0804 02:04:12.660935  130743 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0804 02:04:12.660948  130743 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0804 02:04:12.660959  130743 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0804 02:04:12.660970  130743 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0804 02:04:12.660980  130743 command_runner.go:130] > # shared_cpuset = ""
	I0804 02:04:12.660990  130743 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0804 02:04:12.661001  130743 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0804 02:04:12.661012  130743 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0804 02:04:12.661022  130743 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0804 02:04:12.661032  130743 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0804 02:04:12.661042  130743 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0804 02:04:12.661054  130743 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0804 02:04:12.661063  130743 command_runner.go:130] > # enable_criu_support = false
	I0804 02:04:12.661076  130743 command_runner.go:130] > # Enable/disable the generation of the container,
	I0804 02:04:12.661089  130743 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0804 02:04:12.661099  130743 command_runner.go:130] > # enable_pod_events = false
	I0804 02:04:12.661117  130743 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0804 02:04:12.661145  130743 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0804 02:04:12.661155  130743 command_runner.go:130] > # default_runtime = "runc"
	I0804 02:04:12.661163  130743 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0804 02:04:12.661177  130743 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0804 02:04:12.661195  130743 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0804 02:04:12.661207  130743 command_runner.go:130] > # creation as a file is not desired either.
	I0804 02:04:12.661223  130743 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0804 02:04:12.661234  130743 command_runner.go:130] > # the hostname is being managed dynamically.
	I0804 02:04:12.661240  130743 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0804 02:04:12.661248  130743 command_runner.go:130] > # ]
	I0804 02:04:12.661258  130743 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0804 02:04:12.661271  130743 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0804 02:04:12.661283  130743 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0804 02:04:12.661295  130743 command_runner.go:130] > # Each entry in the table should follow the format:
	I0804 02:04:12.661303  130743 command_runner.go:130] > #
	I0804 02:04:12.661310  130743 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0804 02:04:12.661321  130743 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0804 02:04:12.661368  130743 command_runner.go:130] > # runtime_type = "oci"
	I0804 02:04:12.661393  130743 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0804 02:04:12.661402  130743 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0804 02:04:12.661414  130743 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0804 02:04:12.661424  130743 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0804 02:04:12.661433  130743 command_runner.go:130] > # monitor_env = []
	I0804 02:04:12.661440  130743 command_runner.go:130] > # privileged_without_host_devices = false
	I0804 02:04:12.661451  130743 command_runner.go:130] > # allowed_annotations = []
	I0804 02:04:12.661460  130743 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0804 02:04:12.661468  130743 command_runner.go:130] > # Where:
	I0804 02:04:12.661478  130743 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0804 02:04:12.661491  130743 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0804 02:04:12.661504  130743 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0804 02:04:12.661517  130743 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0804 02:04:12.661526  130743 command_runner.go:130] > #   in $PATH.
	I0804 02:04:12.661537  130743 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0804 02:04:12.661548  130743 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0804 02:04:12.661558  130743 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0804 02:04:12.661566  130743 command_runner.go:130] > #   state.
	I0804 02:04:12.661578  130743 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0804 02:04:12.661591  130743 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0804 02:04:12.661603  130743 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0804 02:04:12.661614  130743 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0804 02:04:12.661623  130743 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0804 02:04:12.661635  130743 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0804 02:04:12.661643  130743 command_runner.go:130] > #   The currently recognized values are:
	I0804 02:04:12.661656  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0804 02:04:12.661670  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0804 02:04:12.661682  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0804 02:04:12.661694  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0804 02:04:12.661708  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0804 02:04:12.661722  130743 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0804 02:04:12.661734  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0804 02:04:12.661749  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0804 02:04:12.661763  130743 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0804 02:04:12.661775  130743 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0804 02:04:12.661784  130743 command_runner.go:130] > #   deprecated option "conmon".
	I0804 02:04:12.661803  130743 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0804 02:04:12.661813  130743 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0804 02:04:12.661824  130743 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0804 02:04:12.661835  130743 command_runner.go:130] > #   should be moved to the container's cgroup
	I0804 02:04:12.661848  130743 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0804 02:04:12.661859  130743 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0804 02:04:12.661869  130743 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0804 02:04:12.661880  130743 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0804 02:04:12.661887  130743 command_runner.go:130] > #
	I0804 02:04:12.661894  130743 command_runner.go:130] > # Using the seccomp notifier feature:
	I0804 02:04:12.661903  130743 command_runner.go:130] > #
	I0804 02:04:12.661911  130743 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0804 02:04:12.661924  130743 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0804 02:04:12.661932  130743 command_runner.go:130] > #
	I0804 02:04:12.661942  130743 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0804 02:04:12.661955  130743 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0804 02:04:12.661963  130743 command_runner.go:130] > #
	I0804 02:04:12.661973  130743 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0804 02:04:12.661981  130743 command_runner.go:130] > # feature.
	I0804 02:04:12.661986  130743 command_runner.go:130] > #
	I0804 02:04:12.661998  130743 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0804 02:04:12.662010  130743 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0804 02:04:12.662023  130743 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0804 02:04:12.662034  130743 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0804 02:04:12.662046  130743 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0804 02:04:12.662054  130743 command_runner.go:130] > #
	I0804 02:04:12.662061  130743 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0804 02:04:12.662073  130743 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0804 02:04:12.662081  130743 command_runner.go:130] > #
	I0804 02:04:12.662090  130743 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0804 02:04:12.662101  130743 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0804 02:04:12.662114  130743 command_runner.go:130] > #
	I0804 02:04:12.662124  130743 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0804 02:04:12.662134  130743 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0804 02:04:12.662142  130743 command_runner.go:130] > # limitation.
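	The runtime handler entry that follows only lists the defaults for runc; as a minimal sketch of the opt-in described above (the handler name and binary path are illustrative assumptions, not values from this run), a runtime that is allowed to process the notifier annotation could be configured like this:
	# Illustrative crio.conf fragment: a runtime handler whitelisted for the
	# seccomp notifier annotation (names and paths are hypothetical).
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# Pods then set io.kubernetes.cri-o.seccompNotifierAction=stop (and
	# restartPolicy: Never) so CRI-O terminates the workload on blocked syscalls.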
	I0804 02:04:12.662153  130743 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0804 02:04:12.662164  130743 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0804 02:04:12.662174  130743 command_runner.go:130] > runtime_type = "oci"
	I0804 02:04:12.662180  130743 command_runner.go:130] > runtime_root = "/run/runc"
	I0804 02:04:12.662189  130743 command_runner.go:130] > runtime_config_path = ""
	I0804 02:04:12.662195  130743 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0804 02:04:12.662205  130743 command_runner.go:130] > monitor_cgroup = "pod"
	I0804 02:04:12.662214  130743 command_runner.go:130] > monitor_exec_cgroup = ""
	I0804 02:04:12.662223  130743 command_runner.go:130] > monitor_env = [
	I0804 02:04:12.662231  130743 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 02:04:12.662239  130743 command_runner.go:130] > ]
	I0804 02:04:12.662246  130743 command_runner.go:130] > privileged_without_host_devices = false
	I0804 02:04:12.662258  130743 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0804 02:04:12.662269  130743 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0804 02:04:12.662281  130743 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0804 02:04:12.662296  130743 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0804 02:04:12.662310  130743 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0804 02:04:12.662321  130743 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0804 02:04:12.662338  130743 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0804 02:04:12.662354  130743 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0804 02:04:12.662367  130743 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0804 02:04:12.662379  130743 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0804 02:04:12.662387  130743 command_runner.go:130] > # Example:
	I0804 02:04:12.662396  130743 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0804 02:04:12.662403  130743 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0804 02:04:12.662411  130743 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0804 02:04:12.662419  130743 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0804 02:04:12.662424  130743 command_runner.go:130] > # cpuset = "0-1"
	I0804 02:04:12.662430  130743 command_runner.go:130] > # cpushares = 0
	I0804 02:04:12.662435  130743 command_runner.go:130] > # Where:
	I0804 02:04:12.662441  130743 command_runner.go:130] > # The workload name is workload-type.
	I0804 02:04:12.662448  130743 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0804 02:04:12.662453  130743 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0804 02:04:12.662458  130743 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0804 02:04:12.662465  130743 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0804 02:04:12.662471  130743 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
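	Putting the pieces of that example together, a sketch of a complete workload definition with consistent resource types (the concrete values and container name are illustrative assumptions):
	# Illustrative crio.conf fragment for the workloads mechanism described above.
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.workload-type.resources]
	cpushares = 1024      # default CPU shares (integer)
	cpuset = "0-1"        # default cpuset (range string)
	# A pod opts in with the annotation "io.crio/workload" (value ignored) and can
	# override per container, e.g. io.crio.workload-type.cpushares/<container> = "512".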
	I0804 02:04:12.662476  130743 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0804 02:04:12.662483  130743 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0804 02:04:12.662488  130743 command_runner.go:130] > # Default value is set to true
	I0804 02:04:12.662492  130743 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0804 02:04:12.662497  130743 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0804 02:04:12.662501  130743 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0804 02:04:12.662505  130743 command_runner.go:130] > # Default value is set to 'false'
	I0804 02:04:12.662509  130743 command_runner.go:130] > # disable_hostport_mapping = false
	I0804 02:04:12.662515  130743 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0804 02:04:12.662518  130743 command_runner.go:130] > #
	I0804 02:04:12.662524  130743 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0804 02:04:12.662531  130743 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0804 02:04:12.662537  130743 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0804 02:04:12.662543  130743 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0804 02:04:12.662548  130743 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0804 02:04:12.662552  130743 command_runner.go:130] > [crio.image]
	I0804 02:04:12.662557  130743 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0804 02:04:12.662561  130743 command_runner.go:130] > # default_transport = "docker://"
	I0804 02:04:12.662566  130743 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0804 02:04:12.662572  130743 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0804 02:04:12.662576  130743 command_runner.go:130] > # global_auth_file = ""
	I0804 02:04:12.662580  130743 command_runner.go:130] > # The image used to instantiate infra containers.
	I0804 02:04:12.662585  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.662589  130743 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0804 02:04:12.662595  130743 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0804 02:04:12.662601  130743 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0804 02:04:12.662608  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.662612  130743 command_runner.go:130] > # pause_image_auth_file = ""
	I0804 02:04:12.662619  130743 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0804 02:04:12.662625  130743 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0804 02:04:12.662634  130743 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0804 02:04:12.662642  130743 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0804 02:04:12.662648  130743 command_runner.go:130] > # pause_command = "/pause"
	I0804 02:04:12.662654  130743 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0804 02:04:12.662662  130743 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0804 02:04:12.662668  130743 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0804 02:04:12.662678  130743 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0804 02:04:12.662686  130743 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0804 02:04:12.662692  130743 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0804 02:04:12.662698  130743 command_runner.go:130] > # pinned_images = [
	I0804 02:04:12.662702  130743 command_runner.go:130] > # ]
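	As a sketch of the three matching rules just described (the image names are placeholders, not images pinned in this run):
	# Illustrative pinned_images entry: exact, trailing-glob, and keyword patterns.
	# pinned_images = [
	# 	"registry.k8s.io/pause:3.9",   # exact: must match the entire name
	# 	"registry.k8s.io/kube-*",      # glob: wildcard allowed at the end
	# 	"*coredns*",                   # keyword: wildcards on both ends
	# ]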
	I0804 02:04:12.662708  130743 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0804 02:04:12.662717  130743 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0804 02:04:12.662723  130743 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0804 02:04:12.662731  130743 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0804 02:04:12.662736  130743 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0804 02:04:12.662741  130743 command_runner.go:130] > # signature_policy = ""
	I0804 02:04:12.662747  130743 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0804 02:04:12.662755  130743 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0804 02:04:12.662762  130743 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0804 02:04:12.662769  130743 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0804 02:04:12.662774  130743 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0804 02:04:12.662779  130743 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
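	For the namespaced policy lookup described above, a small sketch (the namespace is hypothetical):
	# With signature_policy_dir = "/etc/crio/policies", a pull for a pod in
	# namespace "team-a" is checked against /etc/crio/policies/team-a.json;
	# if that file does not exist, CRI-O falls back to signature_policy or the
	# system-wide /etc/containers/policy.json.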
	I0804 02:04:12.662785  130743 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0804 02:04:12.662793  130743 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0804 02:04:12.662797  130743 command_runner.go:130] > # changing them here.
	I0804 02:04:12.662802  130743 command_runner.go:130] > # insecure_registries = [
	I0804 02:04:12.662805  130743 command_runner.go:130] > # ]
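	A sketch of the override this comment warns about (the registry host is a placeholder):
	# Illustrative only; prefer configuring registries in /etc/containers/registries.conf.
	# insecure_registries = [
	# 	"registry.local:5000",
	# ]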
	I0804 02:04:12.662813  130743 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0804 02:04:12.662818  130743 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0804 02:04:12.662824  130743 command_runner.go:130] > # image_volumes = "mkdir"
	I0804 02:04:12.662829  130743 command_runner.go:130] > # Temporary directory to use for storing big files
	I0804 02:04:12.662833  130743 command_runner.go:130] > # big_files_temporary_dir = ""
	I0804 02:04:12.662839  130743 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0804 02:04:12.662845  130743 command_runner.go:130] > # CNI plugins.
	I0804 02:04:12.662849  130743 command_runner.go:130] > [crio.network]
	I0804 02:04:12.662858  130743 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0804 02:04:12.662866  130743 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0804 02:04:12.662870  130743 command_runner.go:130] > # cni_default_network = ""
	I0804 02:04:12.662876  130743 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0804 02:04:12.662881  130743 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0804 02:04:12.662886  130743 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0804 02:04:12.662891  130743 command_runner.go:130] > # plugin_dirs = [
	I0804 02:04:12.662897  130743 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0804 02:04:12.662901  130743 command_runner.go:130] > # ]
	I0804 02:04:12.662907  130743 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0804 02:04:12.662912  130743 command_runner.go:130] > [crio.metrics]
	I0804 02:04:12.662917  130743 command_runner.go:130] > # Globally enable or disable metrics support.
	I0804 02:04:12.662923  130743 command_runner.go:130] > enable_metrics = true
	I0804 02:04:12.662927  130743 command_runner.go:130] > # Specify enabled metrics collectors.
	I0804 02:04:12.662934  130743 command_runner.go:130] > # Per default all metrics are enabled.
	I0804 02:04:12.662941  130743 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0804 02:04:12.662949  130743 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0804 02:04:12.662955  130743 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0804 02:04:12.662961  130743 command_runner.go:130] > # metrics_collectors = [
	I0804 02:04:12.662965  130743 command_runner.go:130] > # 	"operations",
	I0804 02:04:12.662969  130743 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0804 02:04:12.662973  130743 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0804 02:04:12.662978  130743 command_runner.go:130] > # 	"operations_errors",
	I0804 02:04:12.662981  130743 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0804 02:04:12.662986  130743 command_runner.go:130] > # 	"image_pulls_by_name",
	I0804 02:04:12.662991  130743 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0804 02:04:12.662998  130743 command_runner.go:130] > # 	"image_pulls_failures",
	I0804 02:04:12.663002  130743 command_runner.go:130] > # 	"image_pulls_successes",
	I0804 02:04:12.663007  130743 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0804 02:04:12.663010  130743 command_runner.go:130] > # 	"image_layer_reuse",
	I0804 02:04:12.663015  130743 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0804 02:04:12.663018  130743 command_runner.go:130] > # 	"containers_oom_total",
	I0804 02:04:12.663022  130743 command_runner.go:130] > # 	"containers_oom",
	I0804 02:04:12.663026  130743 command_runner.go:130] > # 	"processes_defunct",
	I0804 02:04:12.663031  130743 command_runner.go:130] > # 	"operations_total",
	I0804 02:04:12.663035  130743 command_runner.go:130] > # 	"operations_latency_seconds",
	I0804 02:04:12.663041  130743 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0804 02:04:12.663046  130743 command_runner.go:130] > # 	"operations_errors_total",
	I0804 02:04:12.663052  130743 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0804 02:04:12.663056  130743 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0804 02:04:12.663062  130743 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0804 02:04:12.663067  130743 command_runner.go:130] > # 	"image_pulls_success_total",
	I0804 02:04:12.663070  130743 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0804 02:04:12.663075  130743 command_runner.go:130] > # 	"containers_oom_count_total",
	I0804 02:04:12.663080  130743 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0804 02:04:12.663084  130743 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0804 02:04:12.663087  130743 command_runner.go:130] > # ]
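	To illustrate the collector-naming rule above (enabling only a subset is an assumption made for the example, not the configuration used in this run):
	# Illustrative metrics configuration: "operations" is treated the same as
	# "crio_operations" and "container_runtime_crio_operations".
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]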
	I0804 02:04:12.663092  130743 command_runner.go:130] > # The port on which the metrics server will listen.
	I0804 02:04:12.663098  130743 command_runner.go:130] > # metrics_port = 9090
	I0804 02:04:12.663102  130743 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0804 02:04:12.663109  130743 command_runner.go:130] > # metrics_socket = ""
	I0804 02:04:12.663117  130743 command_runner.go:130] > # The certificate for the secure metrics server.
	I0804 02:04:12.663123  130743 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0804 02:04:12.663131  130743 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0804 02:04:12.663135  130743 command_runner.go:130] > # certificate on any modification event.
	I0804 02:04:12.663139  130743 command_runner.go:130] > # metrics_cert = ""
	I0804 02:04:12.663145  130743 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0804 02:04:12.663150  130743 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0804 02:04:12.663154  130743 command_runner.go:130] > # metrics_key = ""
	I0804 02:04:12.663159  130743 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0804 02:04:12.663165  130743 command_runner.go:130] > [crio.tracing]
	I0804 02:04:12.663171  130743 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0804 02:04:12.663176  130743 command_runner.go:130] > # enable_tracing = false
	I0804 02:04:12.663181  130743 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0804 02:04:12.663188  130743 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0804 02:04:12.663195  130743 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0804 02:04:12.663201  130743 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
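	A sketch of a tracing setup that always samples, per the comment above (the endpoint is the documented default, not something configured in this run):
	# Illustrative crio.conf fragment for always-on OpenTelemetry tracing.
	# enable_tracing = true
	# tracing_endpoint = "0.0.0.0:4317"
	# tracing_sampling_rate_per_million = 1000000   # always sample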
	I0804 02:04:12.663206  130743 command_runner.go:130] > # CRI-O NRI configuration.
	I0804 02:04:12.663211  130743 command_runner.go:130] > [crio.nri]
	I0804 02:04:12.663215  130743 command_runner.go:130] > # Globally enable or disable NRI.
	I0804 02:04:12.663219  130743 command_runner.go:130] > # enable_nri = false
	I0804 02:04:12.663223  130743 command_runner.go:130] > # NRI socket to listen on.
	I0804 02:04:12.663227  130743 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0804 02:04:12.663236  130743 command_runner.go:130] > # NRI plugin directory to use.
	I0804 02:04:12.663243  130743 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0804 02:04:12.663254  130743 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0804 02:04:12.663263  130743 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0804 02:04:12.663272  130743 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0804 02:04:12.663281  130743 command_runner.go:130] > # nri_disable_connections = false
	I0804 02:04:12.663299  130743 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0804 02:04:12.663310  130743 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0804 02:04:12.663316  130743 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0804 02:04:12.663322  130743 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0804 02:04:12.663328  130743 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0804 02:04:12.663334  130743 command_runner.go:130] > [crio.stats]
	I0804 02:04:12.663340  130743 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0804 02:04:12.663347  130743 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0804 02:04:12.663351  130743 command_runner.go:130] > # stats_collection_period = 0
	I0804 02:04:12.663968  130743 command_runner.go:130] ! time="2024-08-04 02:04:12.627368364Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0804 02:04:12.663997  130743 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0804 02:04:12.664170  130743 cni.go:84] Creating CNI manager for ""
	I0804 02:04:12.664183  130743 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 02:04:12.664193  130743 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 02:04:12.664214  130743 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-229184 NodeName:multinode-229184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 02:04:12.664420  130743 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-229184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 02:04:12.664485  130743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 02:04:12.675495  130743 command_runner.go:130] > kubeadm
	I0804 02:04:12.675516  130743 command_runner.go:130] > kubectl
	I0804 02:04:12.675520  130743 command_runner.go:130] > kubelet
	I0804 02:04:12.675549  130743 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 02:04:12.675601  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 02:04:12.686341  130743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0804 02:04:12.703403  130743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 02:04:12.720323  130743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0804 02:04:12.736973  130743 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0804 02:04:12.740884  130743 command_runner.go:130] > 192.168.39.183	control-plane.minikube.internal
	I0804 02:04:12.740969  130743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:04:12.889057  130743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 02:04:12.904426  130743 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184 for IP: 192.168.39.183
	I0804 02:04:12.904451  130743 certs.go:194] generating shared ca certs ...
	I0804 02:04:12.904465  130743 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:04:12.904644  130743 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 02:04:12.904729  130743 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 02:04:12.904742  130743 certs.go:256] generating profile certs ...
	I0804 02:04:12.904841  130743 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/client.key
	I0804 02:04:12.904920  130743 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key.8b2c4c64
	I0804 02:04:12.904975  130743 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key
	I0804 02:04:12.904994  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 02:04:12.905015  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 02:04:12.905033  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 02:04:12.905051  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 02:04:12.905067  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 02:04:12.905098  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 02:04:12.905116  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 02:04:12.905134  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 02:04:12.905199  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 02:04:12.905240  130743 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 02:04:12.905256  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 02:04:12.905286  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 02:04:12.905320  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 02:04:12.905350  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 02:04:12.905427  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:04:12.905467  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:12.905487  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 02:04:12.905504  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 02:04:12.906150  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 02:04:12.931137  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 02:04:12.956076  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 02:04:12.981546  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 02:04:13.006435  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 02:04:13.029966  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 02:04:13.055503  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 02:04:13.081867  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 02:04:13.106886  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 02:04:13.131014  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 02:04:13.155467  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 02:04:13.186244  130743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 02:04:13.219177  130743 ssh_runner.go:195] Run: openssl version
	I0804 02:04:13.232715  130743 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0804 02:04:13.232791  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 02:04:13.287380  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296276  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296327  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296387  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.306244  130743 command_runner.go:130] > b5213941
	I0804 02:04:13.306359  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 02:04:13.320254  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 02:04:13.338752  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343684  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343855  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343909  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.349982  130743 command_runner.go:130] > 51391683
	I0804 02:04:13.350224  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 02:04:13.360979  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 02:04:13.374801  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384089  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384239  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384291  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.390521  130743 command_runner.go:130] > 3ec20f2e
	I0804 02:04:13.390592  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 02:04:13.417950  130743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:04:13.423972  130743 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:04:13.424006  130743 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 02:04:13.424013  130743 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0804 02:04:13.424019  130743 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 02:04:13.424026  130743 command_runner.go:130] > Access: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424030  130743 command_runner.go:130] > Modify: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424035  130743 command_runner.go:130] > Change: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424040  130743 command_runner.go:130] >  Birth: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424111  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 02:04:13.432144  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.435416  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 02:04:13.450083  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.450356  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 02:04:13.457299  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.457655  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 02:04:13.464561  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.464691  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 02:04:13.472512  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.472643  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 02:04:13.482576  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.482733  130743 kubeadm.go:392] StartCluster: {Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:04:13.482900  130743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 02:04:13.482970  130743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 02:04:13.540361  130743 command_runner.go:130] > b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c
	I0804 02:04:13.540387  130743 command_runner.go:130] > 19e85822cc0c4868dd92301e8ff26e66d1d874d9d1105ccf4cea0d34541573f1
	I0804 02:04:13.540393  130743 command_runner.go:130] > 0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e
	I0804 02:04:13.540411  130743 command_runner.go:130] > 68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb
	I0804 02:04:13.540419  130743 command_runner.go:130] > 3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6
	I0804 02:04:13.540429  130743 command_runner.go:130] > bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b
	I0804 02:04:13.540437  130743 command_runner.go:130] > 997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e
	I0804 02:04:13.540463  130743 command_runner.go:130] > b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a
	I0804 02:04:13.540475  130743 command_runner.go:130] > f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc
	I0804 02:04:13.542799  130743 cri.go:89] found id: "b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c"
	I0804 02:04:13.542820  130743 cri.go:89] found id: "19e85822cc0c4868dd92301e8ff26e66d1d874d9d1105ccf4cea0d34541573f1"
	I0804 02:04:13.542824  130743 cri.go:89] found id: "0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e"
	I0804 02:04:13.542827  130743 cri.go:89] found id: "68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb"
	I0804 02:04:13.542829  130743 cri.go:89] found id: "3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6"
	I0804 02:04:13.542832  130743 cri.go:89] found id: "bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b"
	I0804 02:04:13.542835  130743 cri.go:89] found id: "997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e"
	I0804 02:04:13.542838  130743 cri.go:89] found id: "b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a"
	I0804 02:04:13.542841  130743 cri.go:89] found id: "f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc"
	I0804 02:04:13.542846  130743 cri.go:89] found id: ""
	I0804 02:04:13.542891  130743 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.787536212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737162787512959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46550ea6-2286-4b12-be01-558230f47054 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.788183974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ded573a-1288-4733-972e-34c1ea01b6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.788258986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ded573a-1288-4733-972e-34c1ea01b6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.788647709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ded573a-1288-4733-972e-34c1ea01b6ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.835187066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=049fe068-40c3-4a4d-a757-c131980c53d8 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.835268668Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=049fe068-40c3-4a4d-a757-c131980c53d8 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.836529673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc9cb523-f19f-4272-921a-464afca439d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.837336655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737162837312220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc9cb523-f19f-4272-921a-464afca439d8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.837936469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2f2d8ec-e059-46f8-83e4-3fee69ebca08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.837989541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2f2d8ec-e059-46f8-83e4-3fee69ebca08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.838578218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2f2d8ec-e059-46f8-83e4-3fee69ebca08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.881367394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78e7619a-441e-408f-863b-167a67343620 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.881679274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78e7619a-441e-408f-863b-167a67343620 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.883334034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5e8e3d2-2b86-443a-a6b9-9d3605fd38f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.883890747Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737162883866014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5e8e3d2-2b86-443a-a6b9-9d3605fd38f9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.884497329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d7b75bb-c8da-43ad-bdc2-8fbb324b0397 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.884579028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d7b75bb-c8da-43ad-bdc2-8fbb324b0397 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.885295611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d7b75bb-c8da-43ad-bdc2-8fbb324b0397 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.929462309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=486c4223-ab1a-48c3-bd85-3d5812a2aeb0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.929570106Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=486c4223-ab1a-48c3-bd85-3d5812a2aeb0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.930783424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d8a6dd1-2378-43c4-9e14-e1b3132e4e1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.931410618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737162931278074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d8a6dd1-2378-43c4-9e14-e1b3132e4e1e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.932080785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=983491fc-30cd-46b4-a7e1-747a98cf70d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.932139703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=983491fc-30cd-46b4-a7e1-747a98cf70d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:06:02 multinode-229184 crio[2889]: time="2024-08-04 02:06:02.932499697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=983491fc-30cd-46b4-a7e1-747a98cf70d7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d5779444b833       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ac217b95bd857       busybox-fc5497c4f-jq4l7
	069c0ab9ae296       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   914062abfbb28       coredns-7db6d8ff4d-s8kfn
	de2d1594af7cd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   7052ee9c14022       kube-proxy-cnd2r
	24f62ca3dc87b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   187b59dc0d255       storage-provisioner
	fbac06e11821b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   a69d4dc36c963       kindnet-85878
	43c8f76c44337       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   12511f4c9f625       kube-controller-manager-multinode-229184
	c90d734c1552d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   0b8d18faaf50f       etcd-multinode-229184
	768428b12453d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   efd1f26e59a20       kube-apiserver-multinode-229184
	2d83f840bcd2d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   35cc64ddca94f       kube-scheduler-multinode-229184
	b7d560c128154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   914062abfbb28       coredns-7db6d8ff4d-s8kfn
	451755c9cae30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   9091e3232b4e4       busybox-fc5497c4f-jq4l7
	0f8e8d602fa18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   2ff7b86356264       storage-provisioner
	68dc307aba765       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   f14c29a7d94b4       kindnet-85878
	3eb91b14876af       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   f2e81613fe5ae       kube-proxy-cnd2r
	bcdd0c1a35983       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   4ba53ac02e903       kube-controller-manager-multinode-229184
	997af80342f16       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   5f06df713675f       kube-scheduler-multinode-229184
	b7c7ca7827fb9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   7d2c2feafa639       etcd-multinode-229184
	f19c91e30619a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   21908fae5b9cf       kube-apiserver-multinode-229184
	
	
	==> coredns [069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43518 - 52193 "HINFO IN 6497034716295087957.8583571617719234661. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0144819s
	
	
	==> coredns [b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47718 - 54294 "HINFO IN 7286318690051177686.7691211596556706498. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015977015s
	
	
	==> describe nodes <==
	Name:               multinode-229184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=multinode-229184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_57_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:57:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229184
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:05:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-229184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc307d996e9243b285c82774ea0fb47c
	  System UUID:                dc307d99-6e92-43b2-85c8-2774ea0fb47c
	  Boot ID:                    603f0dbd-bdd0-4a81-80ff-c63c2f5b26f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jq4l7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 coredns-7db6d8ff4d-s8kfn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m30s
	  kube-system                 etcd-multinode-229184                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m44s
	  kube-system                 kindnet-85878                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m30s
	  kube-system                 kube-apiserver-multinode-229184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 kube-controller-manager-multinode-229184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-proxy-cnd2r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-multinode-229184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m28s  kube-proxy       
	  Normal  Starting                 102s   kube-proxy       
	  Normal  NodeAllocatableEnforced  8m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m44s  kubelet          Node multinode-229184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s  kubelet          Node multinode-229184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s  kubelet          Node multinode-229184 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m44s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m31s  node-controller  Node multinode-229184 event: Registered Node multinode-229184 in Controller
	  Normal  NodeReady                8m14s  kubelet          Node multinode-229184 status is now: NodeReady
	  Normal  Starting                 98s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s    kubelet          Node multinode-229184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s    kubelet          Node multinode-229184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s    kubelet          Node multinode-229184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s    node-controller  Node multinode-229184 event: Registered Node multinode-229184 in Controller
	
	
	Name:               multinode-229184-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229184-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=multinode-229184
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T02_05_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 02:05:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229184-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:05:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:05:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:05:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:05:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-229184-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbef89f3c761447ea37b3747483f1a85
	  System UUID:                cbef89f3-c761-447e-a37b-3747483f1a85
	  Boot ID:                    f0030be3-092a-4ccb-842e-a557f03824f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mccck    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-v7wgl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m41s
	  kube-system                 kube-proxy-jfj5c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  Starting                 7m35s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  7m41s (x2 over 7m41s)  kubelet     Node multinode-229184-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x2 over 7m41s)  kubelet     Node multinode-229184-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x2 over 7m41s)  kubelet     Node multinode-229184-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m41s                  kubelet     Starting kubelet.
	  Normal  NodeReady                7m20s                  kubelet     Node multinode-229184-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-229184-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-229184-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-229184-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 61s                    kubelet     Starting kubelet.
	  Normal  NodeReady                42s                    kubelet     Node multinode-229184-m02 status is now: NodeReady
	
	
	Name:               multinode-229184-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229184-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=multinode-229184
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T02_05_41_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 02:05:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229184-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:06:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 02:05:59 +0000   Sun, 04 Aug 2024 02:05:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 02:05:59 +0000   Sun, 04 Aug 2024 02:05:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 02:05:59 +0000   Sun, 04 Aug 2024 02:05:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 02:05:59 +0000   Sun, 04 Aug 2024 02:05:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.152
	  Hostname:    multinode-229184-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1fc6ab60fff424f8deabed11a15c945
	  System UUID:                a1fc6ab6-0fff-424f-8dea-bed11a15c945
	  Boot ID:                    8e00b56a-2e99-4f7e-a3f8-73eaf8459108
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24bvr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-mhj4m    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m35s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m40s (x4 over 6m40s)  kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x4 over 6m40s)  kubelet     Node multinode-229184-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x4 over 6m40s)  kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m20s                  kubelet     Node multinode-229184-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet     Node multinode-229184-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-229184-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-229184-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-229184-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-229184-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.170549] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.168857] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.282844] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.307765] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.057105] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.539026] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.503504] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.546567] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.075897] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.205582] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.450535] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +5.351809] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 4 01:58] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 4 02:04] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.148867] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.170780] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.140751] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.288346] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +1.349100] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +4.562732] kauditd_printk_skb: 132 callbacks suppressed
	[  +7.564648] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.106092] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.550367] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.918495] systemd-fstab-generator[4007]: Ignoring "noauto" option for root device
	[ +14.642146] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a] <==
	{"level":"info","ts":"2024-08-04T01:58:26.620166Z","caller":"traceutil/trace.go:171","msg":"trace[1185917055] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:485; }","duration":"245.795882ms","start":"2024-08-04T01:58:26.374359Z","end":"2024-08-04T01:58:26.620155Z","steps":["trace[1185917055] 'agreement among raft nodes before linearized reading'  (duration: 245.553535ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:23.337707Z","caller":"traceutil/trace.go:171","msg":"trace[1902374710] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"301.442443ms","start":"2024-08-04T01:59:23.03624Z","end":"2024-08-04T01:59:23.337683Z","steps":["trace[1902374710] 'process raft request'  (duration: 301.095742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:23.338784Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T01:59:23.03622Z","time spent":"301.974674ms","remote":"127.0.0.1:47674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":925,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:871 >> failure:<>"}
	{"level":"warn","ts":"2024-08-04T01:59:23.726457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.455292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4097872256555623048 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:591 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-dsv65\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-04T01:59:23.726642Z","caller":"traceutil/trace.go:171","msg":"trace[604818588] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"318.123459ms","start":"2024-08-04T01:59:23.408506Z","end":"2024-08-04T01:59:23.72663Z","steps":["trace[604818588] 'process raft request'  (duration: 138.180768ms)","trace[604818588] 'compare'  (duration: 179.071415ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T01:59:23.726727Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T01:59:23.40849Z","time spent":"318.200448ms","remote":"127.0.0.1:47674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2350,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:591 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-dsv65\" > >"}
	{"level":"warn","ts":"2024-08-04T01:59:24.010545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.046613ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4097872256555623052 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:38de911b1aed228b>","response":"size:41"}
	{"level":"info","ts":"2024-08-04T01:59:24.011021Z","caller":"traceutil/trace.go:171","msg":"trace[788522474] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:631; }","duration":"204.538095ms","start":"2024-08-04T01:59:23.806471Z","end":"2024-08-04T01:59:24.011009Z","steps":["trace[788522474] 'read index received'  (duration: 51.942023ms)","trace[788522474] 'applied index is now lower than readState.Index'  (duration: 152.595282ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T01:59:24.011252Z","caller":"traceutil/trace.go:171","msg":"trace[65515223] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"204.847388ms","start":"2024-08-04T01:59:23.806396Z","end":"2024-08-04T01:59:24.011244Z","steps":["trace[65515223] 'process raft request'  (duration: 204.379021ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:24.011339Z","caller":"traceutil/trace.go:171","msg":"trace[1948321475] transaction","detail":"{read_only:false; number_of_response:1; response_revision:594; }","duration":"203.900075ms","start":"2024-08-04T01:59:23.807434Z","end":"2024-08-04T01:59:24.011334Z","steps":["trace[1948321475] 'process raft request'  (duration: 203.440411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:24.011444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.965169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-229184-m03\" ","response":"range_response_count:1 size:2039"}
	{"level":"info","ts":"2024-08-04T01:59:24.011484Z","caller":"traceutil/trace.go:171","msg":"trace[1463434038] range","detail":"{range_begin:/registry/minions/multinode-229184-m03; range_end:; response_count:1; response_revision:594; }","duration":"205.022808ms","start":"2024-08-04T01:59:23.80645Z","end":"2024-08-04T01:59:24.011472Z","steps":["trace[1463434038] 'agreement among raft nodes before linearized reading'  (duration: 204.968511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:24.011804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.310046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T01:59:24.011827Z","caller":"traceutil/trace.go:171","msg":"trace[1135990474] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:595; }","duration":"205.35086ms","start":"2024-08-04T01:59:23.806469Z","end":"2024-08-04T01:59:24.01182Z","steps":["trace[1135990474] 'agreement among raft nodes before linearized reading'  (duration: 205.295293ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:27.769389Z","caller":"traceutil/trace.go:171","msg":"trace[859937074] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"133.704395ms","start":"2024-08-04T01:59:27.635663Z","end":"2024-08-04T01:59:27.769367Z","steps":["trace[859937074] 'process raft request'  (duration: 132.751509ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T02:02:39.454925Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T02:02:39.457981Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-229184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"]}
	{"level":"warn","ts":"2024-08-04T02:02:39.458182Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458237Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458364Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T02:02:39.551011Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f87838631c8138de","current-leader-member-id":"f87838631c8138de"}
	{"level":"info","ts":"2024-08-04T02:02:39.554185Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:02:39.554381Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:02:39.554429Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-229184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"]}
	
	
	==> etcd [c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5] <==
	{"level":"info","ts":"2024-08-04T02:04:18.192366Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T02:04:18.192373Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T02:04:18.19262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de switched to configuration voters=(17904122316942555358)"}
	{"level":"info","ts":"2024-08-04T02:04:18.192666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","added-peer-id":"f87838631c8138de","added-peer-peer-urls":["https://192.168.39.183:2380"]}
	{"level":"info","ts":"2024-08-04T02:04:18.192759Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:04:18.192781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:04:18.207399Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T02:04:18.233317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f87838631c8138de","initial-advertise-peer-urls":["https://192.168.39.183:2380"],"listen-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T02:04:18.239315Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:04:18.239342Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:04:18.239351Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T02:04:19.376155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.3762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.37623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgPreVoteResp from f87838631c8138de at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.376248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became leader at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f87838631c8138de elected leader f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.378132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:04:19.378138Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f87838631c8138de","local-member-attributes":"{Name:multinode-229184 ClientURLs:[https://192.168.39.183:2379]}","request-path":"/0/members/f87838631c8138de/attributes","cluster-id":"2dc4003dc2fbf749","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T02:04:19.37873Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:04:19.378925Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T02:04:19.378958Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T02:04:19.380221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T02:04:19.380605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.183:2379"}
	
	
	==> kernel <==
	 02:06:03 up 9 min,  0 users,  load average: 0.18, 0.30, 0.19
	Linux multinode-229184 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb] <==
	I0804 02:01:58.903555       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:08.902457       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:08.902623       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:08.902826       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:08.902872       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:08.902940       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:08.902960       1 main.go:299] handling current node
	I0804 02:02:18.902945       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:18.903124       1 main.go:299] handling current node
	I0804 02:02:18.903159       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:18.903178       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:18.903323       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:18.903345       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:28.900987       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:28.901185       1 main.go:299] handling current node
	I0804 02:02:28.901214       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:28.901251       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:28.901428       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:28.901451       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:38.894520       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:38.894583       1 main.go:299] handling current node
	I0804 02:02:38.894598       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:38.894637       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:38.894757       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:38.894763       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9] <==
	I0804 02:05:18.910539       1 main.go:299] handling current node
	I0804 02:05:28.910980       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:05:28.911190       1 main.go:299] handling current node
	I0804 02:05:28.911227       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:05:28.911306       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:05:28.911531       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:05:28.911590       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:05:38.911754       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:05:38.911888       1 main.go:299] handling current node
	I0804 02:05:38.911919       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:05:38.912030       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:05:38.912624       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:05:38.912732       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:05:48.910934       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:05:48.911125       1 main.go:299] handling current node
	I0804 02:05:48.911141       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:05:48.911163       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:05:48.911469       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:05:48.911498       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.2.0/24] 
	I0804 02:05:58.916965       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:05:58.917183       1 main.go:299] handling current node
	I0804 02:05:58.917329       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:05:58.917341       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:05:58.917863       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:05:58.917972       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b] <==
	I0804 02:04:20.698253       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 02:04:20.698350       1 policy_source.go:224] refreshing policies
	I0804 02:04:20.714345       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 02:04:20.714457       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 02:04:20.714501       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 02:04:20.723548       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 02:04:20.735776       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 02:04:20.736346       1 aggregator.go:165] initial CRD sync complete...
	I0804 02:04:20.736422       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 02:04:20.736448       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 02:04:20.736471       1 cache.go:39] Caches are synced for autoregister controller
	I0804 02:04:20.749368       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 02:04:20.750190       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 02:04:20.750394       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 02:04:20.767891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 02:04:20.795673       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0804 02:04:20.844191       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 02:04:21.620305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 02:04:25.790743       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 02:04:25.919938       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 02:04:25.931026       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 02:04:25.999923       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 02:04:26.009760       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 02:04:33.540509       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 02:04:33.736162       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc] <==
	W0804 02:02:39.501861       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.501917       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.501973       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502266       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502470       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502646       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502707       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502764       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502819       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502993       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.503126       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.503384       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504016       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504784       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504879       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504939       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504995       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 02:02:39.505273       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0804 02:02:39.505309       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	W0804 02:02:39.505376       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505441       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505499       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505565       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505619       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505674       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738] <==
	I0804 02:04:34.218869       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 02:04:55.291915       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.949µs"
	I0804 02:04:57.844745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.020968ms"
	I0804 02:04:57.844889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.96µs"
	I0804 02:04:57.855145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.988277ms"
	I0804 02:04:57.855444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.853µs"
	I0804 02:04:57.860374       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.921µs"
	I0804 02:05:02.555207       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m02\" does not exist"
	I0804 02:05:02.568595       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m02" podCIDRs=["10.244.1.0/24"]
	I0804 02:05:03.453767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.322µs"
	I0804 02:05:03.468436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.151µs"
	I0804 02:05:03.480616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.952µs"
	I0804 02:05:03.527577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.808µs"
	I0804 02:05:03.537205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.826µs"
	I0804 02:05:03.542160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.855µs"
	I0804 02:05:21.303400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:21.325218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.129µs"
	I0804 02:05:21.339534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.048µs"
	I0804 02:05:24.912637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.499203ms"
	I0804 02:05:24.912739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.301µs"
	I0804 02:05:39.643920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:40.871516       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 02:05:40.871569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:40.879268       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.2.0/24"]
	I0804 02:05:59.868350       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	
	
	==> kube-controller-manager [bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b] <==
	I0804 01:57:52.589191       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0804 01:58:22.587719       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m02\" does not exist"
	I0804 01:58:22.595256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-229184-m02"
	I0804 01:58:22.604879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m02" podCIDRs=["10.244.1.0/24"]
	I0804 01:58:43.328748       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 01:58:45.690480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.313165ms"
	I0804 01:58:45.715949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.334ms"
	I0804 01:58:45.716029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.721µs"
	I0804 01:58:49.250832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.256537ms"
	I0804 01:58:49.250931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.021µs"
	I0804 01:58:49.585712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.493314ms"
	I0804 01:58:49.586251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.808µs"
	I0804 01:59:23.796685       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 01:59:23.796768       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 01:59:24.029629       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.2.0/24"]
	I0804 01:59:27.620028       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-229184-m03"
	I0804 01:59:43.446385       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m03"
	I0804 02:00:12.970277       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:00:14.004944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:00:14.005595       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 02:00:14.016835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.3.0/24"]
	I0804 02:00:33.686469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:01:17.678706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:01:17.734206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.604098ms"
	I0804 02:01:17.734397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.523µs"
	
	
	==> kube-proxy [3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6] <==
	I0804 01:57:34.492134       1 server_linux.go:69] "Using iptables proxy"
	I0804 01:57:34.507984       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	I0804 01:57:34.554644       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:57:34.554727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:57:34.554747       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:57:34.559399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:57:34.559949       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:57:34.560222       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:57:34.561885       1 config.go:192] "Starting service config controller"
	I0804 01:57:34.563657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:57:34.562151       1 config.go:319] "Starting node config controller"
	I0804 01:57:34.564735       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:57:34.563470       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:57:34.564838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:57:34.665132       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:57:34.665192       1 shared_informer.go:320] Caches are synced for node config
	I0804 01:57:34.665149       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175] <==
	I0804 02:04:19.194499       1 server_linux.go:69] "Using iptables proxy"
	I0804 02:04:20.786638       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	I0804 02:04:20.922176       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 02:04:20.922259       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 02:04:20.922296       1 server_linux.go:165] "Using iptables Proxier"
	I0804 02:04:20.927001       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 02:04:20.927347       1 server.go:872] "Version info" version="v1.30.3"
	I0804 02:04:20.927389       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:04:20.929579       1 config.go:192] "Starting service config controller"
	I0804 02:04:20.929628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 02:04:20.929666       1 config.go:101] "Starting endpoint slice config controller"
	I0804 02:04:20.929671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 02:04:20.932701       1 config.go:319] "Starting node config controller"
	I0804 02:04:20.932737       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 02:04:21.030748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 02:04:21.030846       1 shared_informer.go:320] Caches are synced for service config
	I0804 02:04:21.033494       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf] <==
	I0804 02:04:18.662883       1 serving.go:380] Generated self-signed cert in-memory
	W0804 02:04:20.657131       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 02:04:20.657520       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 02:04:20.657616       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 02:04:20.657641       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 02:04:20.767736       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 02:04:20.768659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:04:20.775946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 02:04:20.776185       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 02:04:20.779412       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 02:04:20.776205       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 02:04:20.879702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e] <==
	E0804 01:57:18.124644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:57:18.138930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.138980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.238680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 01:57:18.238732       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:57:18.249001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 01:57:18.249213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 01:57:18.279185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:57:18.279684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:57:18.304311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 01:57:18.304358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 01:57:18.413774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 01:57:18.413881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 01:57:18.428486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.428542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.432278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 01:57:18.432409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 01:57:18.512975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.513126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.527856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 01:57:18.528129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 01:57:18.606693       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 01:57:18.606725       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 01:57:21.013971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 02:02:39.465927       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 02:04:25 multinode-229184 kubelet[3846]: I0804 02:04:25.451331    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df09d9431e5f7bf804b7cbd24a37d103-k8s-certs\") pod \"kube-controller-manager-multinode-229184\" (UID: \"df09d9431e5f7bf804b7cbd24a37d103\") " pod="kube-system/kube-controller-manager-multinode-229184"
	Aug 04 02:04:25 multinode-229184 kubelet[3846]: I0804 02:04:25.451344    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df09d9431e5f7bf804b7cbd24a37d103-kubeconfig\") pod \"kube-controller-manager-multinode-229184\" (UID: \"df09d9431e5f7bf804b7cbd24a37d103\") " pod="kube-system/kube-controller-manager-multinode-229184"
	Aug 04 02:04:25 multinode-229184 kubelet[3846]: I0804 02:04:25.451363    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df09d9431e5f7bf804b7cbd24a37d103-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-229184\" (UID: \"df09d9431e5f7bf804b7cbd24a37d103\") " pod="kube-system/kube-controller-manager-multinode-229184"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.159566    3846 apiserver.go:52] "Watching apiserver"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.162784    3846 topology_manager.go:215] "Topology Admit Handler" podUID="263f3468-8f44-46ac-adc1-3daab3d99200" podNamespace="kube-system" podName="kindnet-85878"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.162912    3846 topology_manager.go:215] "Topology Admit Handler" podUID="cf584da9-583d-4aeb-9543-47388a20b06d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s8kfn"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.163010    3846 topology_manager.go:215] "Topology Admit Handler" podUID="92c92b5d-bd0b-41d0-810e-66e7a4d0097e" podNamespace="kube-system" podName="kube-proxy-cnd2r"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.163121    3846 topology_manager.go:215] "Topology Admit Handler" podUID="14a14d46-fda3-41ed-9ef2-d2a54615cc0e" podNamespace="kube-system" podName="storage-provisioner"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.163162    3846 topology_manager.go:215] "Topology Admit Handler" podUID="88dc5b8c-6f06-4bf4-b8e9-9388b4018a10" podNamespace="default" podName="busybox-fc5497c4f-jq4l7"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.228763    3846 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.256885    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c92b5d-bd0b-41d0-810e-66e7a4d0097e-xtables-lock\") pod \"kube-proxy-cnd2r\" (UID: \"92c92b5d-bd0b-41d0-810e-66e7a4d0097e\") " pod="kube-system/kube-proxy-cnd2r"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.257410    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c92b5d-bd0b-41d0-810e-66e7a4d0097e-lib-modules\") pod \"kube-proxy-cnd2r\" (UID: \"92c92b5d-bd0b-41d0-810e-66e7a4d0097e\") " pod="kube-system/kube-proxy-cnd2r"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.257564    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/263f3468-8f44-46ac-adc1-3daab3d99200-lib-modules\") pod \"kindnet-85878\" (UID: \"263f3468-8f44-46ac-adc1-3daab3d99200\") " pod="kube-system/kindnet-85878"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.257977    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14a14d46-fda3-41ed-9ef2-d2a54615cc0e-tmp\") pod \"storage-provisioner\" (UID: \"14a14d46-fda3-41ed-9ef2-d2a54615cc0e\") " pod="kube-system/storage-provisioner"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.258013    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/263f3468-8f44-46ac-adc1-3daab3d99200-xtables-lock\") pod \"kindnet-85878\" (UID: \"263f3468-8f44-46ac-adc1-3daab3d99200\") " pod="kube-system/kindnet-85878"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.258084    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/263f3468-8f44-46ac-adc1-3daab3d99200-cni-cfg\") pod \"kindnet-85878\" (UID: \"263f3468-8f44-46ac-adc1-3daab3d99200\") " pod="kube-system/kindnet-85878"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: E0804 02:04:26.440029    3846 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-229184\" already exists" pod="kube-system/kube-controller-manager-multinode-229184"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: E0804 02:04:26.441240    3846 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-229184\" already exists" pod="kube-system/kube-apiserver-multinode-229184"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.464469    3846 scope.go:117] "RemoveContainer" containerID="b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c"
	Aug 04 02:04:33 multinode-229184 kubelet[3846]: I0804 02:04:33.394955    3846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 04 02:05:25 multinode-229184 kubelet[3846]: E0804 02:05:25.306269    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 02:06:02.467482  131879 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-90243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-229184 -n multinode-229184
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-229184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.95s)
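
The "failed to output last start logs" message in the stderr block above is the standard bufio.ErrTooLong failure: a bufio.Scanner with its default buffer gives up as soon as a single line exceeds bufio.MaxScanTokenSize (64 KiB), which a file like lastStart.txt can easily do. The following minimal Go sketch is illustrative only (it is not minikube's code; the file name and the 1 MiB limit are placeholders) and shows both how the error surfaces and how enlarging the scanner buffer avoids it.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path standing in for lastStart.txt; any file with a
		// single very long line reproduces the error.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// Without this call, Scan stops once a line exceeds 64 KiB and
		// s.Err() returns bufio.ErrTooLong. Raising the limit (here to
		// an assumed 1 MiB) lets long lines through.
		s.Buffer(make([]byte, 0, 1024*1024), 1024*1024)

		for s.Scan() {
			_ = s.Text() // process one log line
		}
		if err := s.Err(); err != nil {
			// With the default buffer this prints:
			// "bufio.Scanner: token too long"
			fmt.Fprintln(os.Stderr, err)
		}
	}
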

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 stop
E0804 02:06:42.266114   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229184 stop: exit status 82 (2m0.475360154s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-229184-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-229184 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229184 status: exit status 3 (18.748154184s)

                                                
                                                
-- stdout --
	multinode-229184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229184-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 02:08:25.889698  132539 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host
	E0804 02:08:25.889732  132539 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.130:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-229184 status" : exit status 3
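
The "no route to host" errors in the stderr block above come from minikube attempting an SSH session to the stopped m02 VM at 192.168.39.130:22. A bare TCP dial surfaces the same error string; the sketch below is purely illustrative (the address is copied from the log and the 5-second timeout is an arbitrary assumption), not minikube's actual status probe.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.130:22" // SSH endpoint taken from the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// When the VM is gone this typically prints something like:
			// dial tcp 192.168.39.130:22: connect: no route to host
			fmt.Println("ssh port unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable on", addr)
	}
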
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-229184 -n multinode-229184
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-229184 logs -n 25: (1.483696088s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184:/home/docker/cp-test_multinode-229184-m02_multinode-229184.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184 sudo cat                                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m02_multinode-229184.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03:/home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184-m03 sudo cat                                   | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp testdata/cp-test.txt                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184:/home/docker/cp-test_multinode-229184-m03_multinode-229184.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184 sudo cat                                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m03_multinode-229184.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt                       | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m02:/home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n                                                                 | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | multinode-229184-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229184 ssh -n multinode-229184-m02 sudo cat                                   | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	|         | /home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-229184 node stop m03                                                          | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 01:59 UTC |
	| node    | multinode-229184 node start                                                             | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 01:59 UTC | 04 Aug 24 02:00 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-229184                                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:00 UTC |                     |
	| stop    | -p multinode-229184                                                                     | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:00 UTC |                     |
	| start   | -p multinode-229184                                                                     | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:02 UTC | 04 Aug 24 02:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-229184                                                                | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:06 UTC |                     |
	| node    | multinode-229184 node delete                                                            | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:06 UTC | 04 Aug 24 02:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-229184 stop                                                                   | multinode-229184 | jenkins | v1.33.1 | 04 Aug 24 02:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 02:02:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
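Each entry below follows that klog-style prefix: a severity letter (I=info, W=warning, E=error, F=fatal), the month and day, the wall-clock time with microseconds, the logging process/thread id, and the file:line that emitted the message. The first entry, "I0804 02:02:38.440729  130743 out.go:291]", is therefore an info-level message written on August 4 at 02:02:38.440729 by pid 130743 from out.go line 291.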
	I0804 02:02:38.440729  130743 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:02:38.441001  130743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:02:38.441012  130743 out.go:304] Setting ErrFile to fd 2...
	I0804 02:02:38.441016  130743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:02:38.441180  130743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:02:38.441762  130743 out.go:298] Setting JSON to false
	I0804 02:02:38.442661  130743 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":13502,"bootTime":1722723456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:02:38.442724  130743 start.go:139] virtualization: kvm guest
	I0804 02:02:38.446318  130743 out.go:177] * [multinode-229184] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:02:38.447610  130743 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:02:38.447636  130743 notify.go:220] Checking for updates...
	I0804 02:02:38.450463  130743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:02:38.451994  130743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:02:38.453436  130743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:02:38.454665  130743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:02:38.456019  130743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:02:38.457634  130743 config.go:182] Loaded profile config "multinode-229184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:02:38.457731  130743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:02:38.458250  130743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 02:02:38.458311  130743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:02:38.473574  130743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0804 02:02:38.474079  130743 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:02:38.474733  130743 main.go:141] libmachine: Using API Version  1
	I0804 02:02:38.474753  130743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:02:38.475145  130743 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:02:38.475301  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.510250  130743 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 02:02:38.511511  130743 start.go:297] selected driver: kvm2
	I0804 02:02:38.511526  130743 start.go:901] validating driver "kvm2" against &{Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:02:38.511822  130743 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:02:38.512279  130743 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:02:38.512361  130743 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:02:38.527528  130743 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:02:38.528284  130743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 02:02:38.528364  130743 cni.go:84] Creating CNI manager for ""
	I0804 02:02:38.528380  130743 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 02:02:38.528452  130743 start.go:340] cluster config:
	{Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:02:38.528617  130743 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:02:38.530404  130743 out.go:177] * Starting "multinode-229184" primary control-plane node in "multinode-229184" cluster
	I0804 02:02:38.531650  130743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:02:38.531700  130743 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 02:02:38.531713  130743 cache.go:56] Caching tarball of preloaded images
	I0804 02:02:38.531796  130743 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:02:38.531809  130743 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 02:02:38.531947  130743 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/config.json ...
	I0804 02:02:38.532163  130743 start.go:360] acquireMachinesLock for multinode-229184: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:02:38.532216  130743 start.go:364] duration metric: took 30.567µs to acquireMachinesLock for "multinode-229184"
	I0804 02:02:38.532237  130743 start.go:96] Skipping create...Using existing machine configuration
	I0804 02:02:38.532248  130743 fix.go:54] fixHost starting: 
	I0804 02:02:38.532508  130743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 02:02:38.532547  130743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:02:38.546901  130743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0804 02:02:38.547343  130743 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:02:38.547809  130743 main.go:141] libmachine: Using API Version  1
	I0804 02:02:38.547831  130743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:02:38.548211  130743 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:02:38.548411  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.548572  130743 main.go:141] libmachine: (multinode-229184) Calling .GetState
	I0804 02:02:38.550155  130743 fix.go:112] recreateIfNeeded on multinode-229184: state=Running err=<nil>
	W0804 02:02:38.550179  130743 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 02:02:38.552182  130743 out.go:177] * Updating the running kvm2 "multinode-229184" VM ...
	I0804 02:02:38.553471  130743 machine.go:94] provisionDockerMachine start ...
	I0804 02:02:38.553491  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:02:38.553685  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.556296  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.556750  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.556773  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.556897  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.557116  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.557298  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.557446  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.557574  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.557797  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.557812  130743 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 02:02:38.674715  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229184
	
	I0804 02:02:38.674749  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.675069  130743 buildroot.go:166] provisioning hostname "multinode-229184"
	I0804 02:02:38.675099  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.675337  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.677938  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.678346  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.678379  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.678497  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.678675  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.678802  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.678967  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.679141  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.679300  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.679312  130743 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-229184 && echo "multinode-229184" | sudo tee /etc/hostname
	I0804 02:02:38.810869  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229184
	
	I0804 02:02:38.810904  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.813825  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.814116  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.814147  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.814308  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:38.814508  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.814693  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:38.814814  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:38.814979  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:38.815209  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:38.815227  130743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-229184' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-229184/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-229184' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 02:02:38.926410  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
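The hostname provisioning that just completed is an idempotent edit of /etc/hosts on the guest: only if no entry already carries the machine name does it either rewrite the existing 127.0.1.1 line or append a new one. Restated from the logged command as a plain shell snippet (same node name as in the run above):

    if ! grep -xq '.*\smultinode-229184' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        # an existing 127.0.1.1 entry gets rewritten in place
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-229184/g' /etc/hosts
      else
        # otherwise a fresh entry is appended
        echo '127.0.1.1 multinode-229184' | sudo tee -a /etc/hosts
      fi
    fi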
	I0804 02:02:38.926447  130743 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 02:02:38.926479  130743 buildroot.go:174] setting up certificates
	I0804 02:02:38.926491  130743 provision.go:84] configureAuth start
	I0804 02:02:38.926501  130743 main.go:141] libmachine: (multinode-229184) Calling .GetMachineName
	I0804 02:02:38.926790  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:02:38.929285  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.929641  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.929674  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.929803  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:38.932086  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.932416  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:38.932444  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:38.932579  130743 provision.go:143] copyHostCerts
	I0804 02:02:38.932617  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:02:38.932654  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 02:02:38.932664  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:02:38.932760  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 02:02:38.932935  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:02:38.932972  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 02:02:38.932982  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:02:38.933023  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 02:02:38.933108  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:02:38.933132  130743 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 02:02:38.933149  130743 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:02:38.933183  130743 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 02:02:38.933265  130743 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.multinode-229184 san=[127.0.0.1 192.168.39.183 localhost minikube multinode-229184]
	I0804 02:02:39.149731  130743 provision.go:177] copyRemoteCerts
	I0804 02:02:39.149789  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 02:02:39.149829  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:39.152335  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.152616  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:39.152663  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.152798  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:39.152998  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.153169  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:39.153299  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:02:39.240704  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0804 02:02:39.240791  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 02:02:39.267141  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0804 02:02:39.267221  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0804 02:02:39.292542  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0804 02:02:39.292631  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 02:02:39.319283  130743 provision.go:87] duration metric: took 392.775488ms to configureAuth
	I0804 02:02:39.319317  130743 buildroot.go:189] setting minikube options for container-runtime
	I0804 02:02:39.319591  130743 config.go:182] Loaded profile config "multinode-229184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:02:39.319683  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:02:39.322292  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.322602  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:02:39.322634  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:02:39.322749  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:02:39.322948  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.323128  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:02:39.323277  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:02:39.323443  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:02:39.323596  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:02:39.323609  130743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 02:04:10.015183  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 02:04:10.015255  130743 machine.go:97] duration metric: took 1m31.461764886s to provisionDockerMachine
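Note that the SSH command issued at 02:02:39 writes the one-line drop-in /etc/sysconfig/crio.minikube (its contents are echoed back above: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ') and then runs systemctl restart crio; that restart is presumably what accounts for most of the 1m31s provisionDockerMachine duration reported here, since the command only returned at 02:04:10.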
	I0804 02:04:10.015270  130743 start.go:293] postStartSetup for "multinode-229184" (driver="kvm2")
	I0804 02:04:10.015281  130743 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 02:04:10.015304  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.015625  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 02:04:10.015660  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.018822  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.019543  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.019574  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.019734  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.019930  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.020122  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.020293  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.109752  130743 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 02:04:10.114303  130743 command_runner.go:130] > NAME=Buildroot
	I0804 02:04:10.114367  130743 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0804 02:04:10.114380  130743 command_runner.go:130] > ID=buildroot
	I0804 02:04:10.114388  130743 command_runner.go:130] > VERSION_ID=2023.02.9
	I0804 02:04:10.114401  130743 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0804 02:04:10.114490  130743 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 02:04:10.114517  130743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 02:04:10.114594  130743 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 02:04:10.114672  130743 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 02:04:10.114684  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /etc/ssl/certs/974072.pem
	I0804 02:04:10.114781  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 02:04:10.124428  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:04:10.150749  130743 start.go:296] duration metric: took 135.461951ms for postStartSetup
	I0804 02:04:10.150812  130743 fix.go:56] duration metric: took 1m31.618564434s for fixHost
	I0804 02:04:10.150857  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.153442  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.153877  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.153907  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.154060  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.154278  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.154460  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.154594  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.154746  130743 main.go:141] libmachine: Using SSH client type: native
	I0804 02:04:10.154914  130743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I0804 02:04:10.154923  130743 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0804 02:04:10.266695  130743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722737050.243907130
	
	I0804 02:04:10.266723  130743 fix.go:216] guest clock: 1722737050.243907130
	I0804 02:04:10.266732  130743 fix.go:229] Guest: 2024-08-04 02:04:10.24390713 +0000 UTC Remote: 2024-08-04 02:04:10.150835405 +0000 UTC m=+91.746146853 (delta=93.071725ms)
	I0804 02:04:10.266777  130743 fix.go:200] guest clock delta is within tolerance: 93.071725ms
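The clock check works by reading the guest's own clock (effectively date +%s.%N, giving 1722737050.243907130, i.e. 02:04:10.24390713 UTC) and comparing it with the host-side timestamp taken for the same moment (02:04:10.150835405): the difference, 0.243907130 - 0.150835405 ≈ 0.093 s, is the 93.071725ms delta reported above, comfortably inside the allowed skew, so no clock adjustment is needed.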
	I0804 02:04:10.266793  130743 start.go:83] releasing machines lock for "multinode-229184", held for 1m31.734564683s
	I0804 02:04:10.266825  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.267110  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:04:10.269639  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.270034  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.270077  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.270225  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.270822  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.271028  130743 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 02:04:10.271146  130743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 02:04:10.271199  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.271344  130743 ssh_runner.go:195] Run: cat /version.json
	I0804 02:04:10.271374  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 02:04:10.274283  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274604  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.274642  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274698  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.274820  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.275008  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.275136  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.275200  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:10.275228  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:10.275241  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.275402  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 02:04:10.275572  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 02:04:10.275732  130743 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 02:04:10.275908  130743 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 02:04:10.374368  130743 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0804 02:04:10.374532  130743 ssh_runner.go:195] Run: systemctl --version
	I0804 02:04:10.398040  130743 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0804 02:04:10.398739  130743 command_runner.go:130] > systemd 252 (252)
	I0804 02:04:10.398763  130743 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0804 02:04:10.398825  130743 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 02:04:10.559561  130743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0804 02:04:10.566772  130743 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0804 02:04:10.566887  130743 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 02:04:10.566966  130743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 02:04:10.578199  130743 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0804 02:04:10.578236  130743 start.go:495] detecting cgroup driver to use...
	I0804 02:04:10.578308  130743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 02:04:10.595136  130743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 02:04:10.609618  130743 docker.go:217] disabling cri-docker service (if available) ...
	I0804 02:04:10.609689  130743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 02:04:10.623234  130743 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 02:04:10.637203  130743 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 02:04:10.787139  130743 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 02:04:10.931772  130743 docker.go:233] disabling docker service ...
	I0804 02:04:10.931860  130743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 02:04:10.948362  130743 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 02:04:10.962949  130743 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 02:04:11.105335  130743 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 02:04:11.249895  130743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
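Because the selected runtime is crio, the sequence above defensively stops and masks the competing runtimes first: containerd, then the cri-docker socket and service, then docker itself, before the CRI-O configuration below is touched.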
	I0804 02:04:11.264535  130743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 02:04:11.284432  130743 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
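Writing /etc/crictl.yaml with that single runtime-endpoint line points crictl at the CRI-O socket, so the later crictl version and crictl images calls in this log need no explicit --runtime-endpoint unix:///var/run/crio/crio.sock flag.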
	I0804 02:04:11.284905  130743 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 02:04:11.284962  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.295953  130743 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 02:04:11.296025  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.307235  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.317874  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.328444  130743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 02:04:11.339731  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.350722  130743 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:04:11.362760  130743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
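Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, put conmon in the pod cgroup, and re-enable unprivileged low ports. A quick way to confirm the result on the node (a sketch, not part of the test run) is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf

which should report approximately:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
      "net.ipv4.ip_unprivileged_port_start=0",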
	I0804 02:04:11.373547  130743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 02:04:11.383335  130743 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0804 02:04:11.383428  130743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 02:04:11.392776  130743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:04:11.534752  130743 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 02:04:12.412281  130743 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 02:04:12.412374  130743 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 02:04:12.417117  130743 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0804 02:04:12.417139  130743 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0804 02:04:12.417146  130743 command_runner.go:130] > Device: 0,22	Inode: 1335        Links: 1
	I0804 02:04:12.417152  130743 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 02:04:12.417157  130743 command_runner.go:130] > Access: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417163  130743 command_runner.go:130] > Modify: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417168  130743 command_runner.go:130] > Change: 2024-08-04 02:04:12.277264361 +0000
	I0804 02:04:12.417172  130743 command_runner.go:130] >  Birth: -
	I0804 02:04:12.417286  130743 start.go:563] Will wait 60s for crictl version
	I0804 02:04:12.417331  130743 ssh_runner.go:195] Run: which crictl
	I0804 02:04:12.421040  130743 command_runner.go:130] > /usr/bin/crictl
	I0804 02:04:12.421104  130743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 02:04:12.460331  130743 command_runner.go:130] > Version:  0.1.0
	I0804 02:04:12.460360  130743 command_runner.go:130] > RuntimeName:  cri-o
	I0804 02:04:12.460367  130743 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0804 02:04:12.460376  130743 command_runner.go:130] > RuntimeApiVersion:  v1
	I0804 02:04:12.460395  130743 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 02:04:12.460478  130743 ssh_runner.go:195] Run: crio --version
	I0804 02:04:12.488467  130743 command_runner.go:130] > crio version 1.29.1
	I0804 02:04:12.488490  130743 command_runner.go:130] > Version:        1.29.1
	I0804 02:04:12.488496  130743 command_runner.go:130] > GitCommit:      unknown
	I0804 02:04:12.488501  130743 command_runner.go:130] > GitCommitDate:  unknown
	I0804 02:04:12.488505  130743 command_runner.go:130] > GitTreeState:   clean
	I0804 02:04:12.488511  130743 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 02:04:12.488515  130743 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 02:04:12.488519  130743 command_runner.go:130] > Compiler:       gc
	I0804 02:04:12.488524  130743 command_runner.go:130] > Platform:       linux/amd64
	I0804 02:04:12.488528  130743 command_runner.go:130] > Linkmode:       dynamic
	I0804 02:04:12.488532  130743 command_runner.go:130] > BuildTags:      
	I0804 02:04:12.488537  130743 command_runner.go:130] >   containers_image_ostree_stub
	I0804 02:04:12.488541  130743 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 02:04:12.488544  130743 command_runner.go:130] >   btrfs_noversion
	I0804 02:04:12.488548  130743 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 02:04:12.488553  130743 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 02:04:12.488559  130743 command_runner.go:130] >   seccomp
	I0804 02:04:12.488563  130743 command_runner.go:130] > LDFlags:          unknown
	I0804 02:04:12.488568  130743 command_runner.go:130] > SeccompEnabled:   true
	I0804 02:04:12.488572  130743 command_runner.go:130] > AppArmorEnabled:  false
	I0804 02:04:12.489733  130743 ssh_runner.go:195] Run: crio --version
	I0804 02:04:12.519326  130743 command_runner.go:130] > crio version 1.29.1
	I0804 02:04:12.519355  130743 command_runner.go:130] > Version:        1.29.1
	I0804 02:04:12.519364  130743 command_runner.go:130] > GitCommit:      unknown
	I0804 02:04:12.519387  130743 command_runner.go:130] > GitCommitDate:  unknown
	I0804 02:04:12.519398  130743 command_runner.go:130] > GitTreeState:   clean
	I0804 02:04:12.519406  130743 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0804 02:04:12.519412  130743 command_runner.go:130] > GoVersion:      go1.21.6
	I0804 02:04:12.519420  130743 command_runner.go:130] > Compiler:       gc
	I0804 02:04:12.519428  130743 command_runner.go:130] > Platform:       linux/amd64
	I0804 02:04:12.519435  130743 command_runner.go:130] > Linkmode:       dynamic
	I0804 02:04:12.519442  130743 command_runner.go:130] > BuildTags:      
	I0804 02:04:12.519450  130743 command_runner.go:130] >   containers_image_ostree_stub
	I0804 02:04:12.519458  130743 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0804 02:04:12.519477  130743 command_runner.go:130] >   btrfs_noversion
	I0804 02:04:12.519485  130743 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0804 02:04:12.519489  130743 command_runner.go:130] >   libdm_no_deferred_remove
	I0804 02:04:12.519493  130743 command_runner.go:130] >   seccomp
	I0804 02:04:12.519497  130743 command_runner.go:130] > LDFlags:          unknown
	I0804 02:04:12.519501  130743 command_runner.go:130] > SeccompEnabled:   true
	I0804 02:04:12.519505  130743 command_runner.go:130] > AppArmorEnabled:  false
	I0804 02:04:12.522218  130743 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 02:04:12.523717  130743 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 02:04:12.526308  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:12.526700  130743 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 02:04:12.526731  130743 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 02:04:12.526931  130743 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0804 02:04:12.531283  130743 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0804 02:04:12.531404  130743 kubeadm.go:883] updating cluster {Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 02:04:12.531546  130743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:04:12.531598  130743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:04:12.574736  130743 command_runner.go:130] > {
	I0804 02:04:12.574767  130743 command_runner.go:130] >   "images": [
	I0804 02:04:12.574773  130743 command_runner.go:130] >     {
	I0804 02:04:12.574786  130743 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 02:04:12.574793  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.574803  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 02:04:12.574809  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574816  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.574830  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 02:04:12.574841  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 02:04:12.574847  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574855  130743 command_runner.go:130] >       "size": "87165492",
	I0804 02:04:12.574861  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.574868  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.574878  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.574886  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.574895  130743 command_runner.go:130] >     },
	I0804 02:04:12.574901  130743 command_runner.go:130] >     {
	I0804 02:04:12.574913  130743 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 02:04:12.574922  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.574933  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 02:04:12.574939  130743 command_runner.go:130] >       ],
	I0804 02:04:12.574949  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.574961  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 02:04:12.574985  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 02:04:12.574995  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575002  130743 command_runner.go:130] >       "size": "87174707",
	I0804 02:04:12.575008  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575024  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575035  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575044  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575053  130743 command_runner.go:130] >     },
	I0804 02:04:12.575061  130743 command_runner.go:130] >     {
	I0804 02:04:12.575072  130743 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 02:04:12.575080  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575091  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 02:04:12.575098  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575105  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575115  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 02:04:12.575127  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 02:04:12.575134  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575141  130743 command_runner.go:130] >       "size": "1363676",
	I0804 02:04:12.575149  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575165  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575173  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575182  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575189  130743 command_runner.go:130] >     },
	I0804 02:04:12.575194  130743 command_runner.go:130] >     {
	I0804 02:04:12.575206  130743 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 02:04:12.575217  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575227  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 02:04:12.575235  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575243  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575259  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 02:04:12.575287  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 02:04:12.575296  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575304  130743 command_runner.go:130] >       "size": "31470524",
	I0804 02:04:12.575312  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575320  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575328  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575342  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575350  130743 command_runner.go:130] >     },
	I0804 02:04:12.575357  130743 command_runner.go:130] >     {
	I0804 02:04:12.575369  130743 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 02:04:12.575377  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575385  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 02:04:12.575393  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575399  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575412  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 02:04:12.575425  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 02:04:12.575432  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575438  130743 command_runner.go:130] >       "size": "61245718",
	I0804 02:04:12.575446  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.575456  130743 command_runner.go:130] >       "username": "nonroot",
	I0804 02:04:12.575465  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575471  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575479  130743 command_runner.go:130] >     },
	I0804 02:04:12.575486  130743 command_runner.go:130] >     {
	I0804 02:04:12.575496  130743 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 02:04:12.575506  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575514  130743 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 02:04:12.575521  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575527  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575539  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 02:04:12.575554  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 02:04:12.575562  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575569  130743 command_runner.go:130] >       "size": "150779692",
	I0804 02:04:12.575579  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575588  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575596  130743 command_runner.go:130] >       },
	I0804 02:04:12.575606  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575615  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575624  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575633  130743 command_runner.go:130] >     },
	I0804 02:04:12.575640  130743 command_runner.go:130] >     {
	I0804 02:04:12.575650  130743 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 02:04:12.575667  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575679  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 02:04:12.575686  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575695  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575708  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 02:04:12.575721  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 02:04:12.575731  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575738  130743 command_runner.go:130] >       "size": "117609954",
	I0804 02:04:12.575747  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575757  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575765  130743 command_runner.go:130] >       },
	I0804 02:04:12.575771  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575779  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575787  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575792  130743 command_runner.go:130] >     },
	I0804 02:04:12.575800  130743 command_runner.go:130] >     {
	I0804 02:04:12.575808  130743 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 02:04:12.575814  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575824  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 02:04:12.575832  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575839  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575867  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 02:04:12.575880  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 02:04:12.575888  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575896  130743 command_runner.go:130] >       "size": "112198984",
	I0804 02:04:12.575905  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.575913  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.575922  130743 command_runner.go:130] >       },
	I0804 02:04:12.575930  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.575935  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.575941  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.575945  130743 command_runner.go:130] >     },
	I0804 02:04:12.575950  130743 command_runner.go:130] >     {
	I0804 02:04:12.575958  130743 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 02:04:12.575963  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.575969  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 02:04:12.575976  130743 command_runner.go:130] >       ],
	I0804 02:04:12.575985  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.575998  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 02:04:12.576012  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 02:04:12.576020  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576027  130743 command_runner.go:130] >       "size": "85953945",
	I0804 02:04:12.576037  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.576046  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576055  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576064  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.576068  130743 command_runner.go:130] >     },
	I0804 02:04:12.576075  130743 command_runner.go:130] >     {
	I0804 02:04:12.576082  130743 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 02:04:12.576088  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.576093  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 02:04:12.576102  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576108  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.576116  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 02:04:12.576133  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 02:04:12.576140  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576145  130743 command_runner.go:130] >       "size": "63051080",
	I0804 02:04:12.576156  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.576163  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.576166  130743 command_runner.go:130] >       },
	I0804 02:04:12.576170  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576176  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576182  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.576190  130743 command_runner.go:130] >     },
	I0804 02:04:12.576196  130743 command_runner.go:130] >     {
	I0804 02:04:12.576206  130743 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 02:04:12.576216  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.576224  130743 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 02:04:12.576232  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576239  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.576253  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 02:04:12.576267  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 02:04:12.576278  130743 command_runner.go:130] >       ],
	I0804 02:04:12.576284  130743 command_runner.go:130] >       "size": "750414",
	I0804 02:04:12.576293  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.576300  130743 command_runner.go:130] >         "value": "65535"
	I0804 02:04:12.576308  130743 command_runner.go:130] >       },
	I0804 02:04:12.576314  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.576323  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.576330  130743 command_runner.go:130] >       "pinned": true
	I0804 02:04:12.576338  130743 command_runner.go:130] >     }
	I0804 02:04:12.576344  130743 command_runner.go:130] >   ]
	I0804 02:04:12.576351  130743 command_runner.go:130] > }
	I0804 02:04:12.576610  130743 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:04:12.576628  130743 crio.go:433] Images already preloaded, skipping extraction
	I0804 02:04:12.576685  130743 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:04:12.610502  130743 command_runner.go:130] > {
	I0804 02:04:12.610535  130743 command_runner.go:130] >   "images": [
	I0804 02:04:12.610542  130743 command_runner.go:130] >     {
	I0804 02:04:12.610554  130743 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0804 02:04:12.610561  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610570  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0804 02:04:12.610576  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610583  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610608  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0804 02:04:12.610620  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0804 02:04:12.610626  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610634  130743 command_runner.go:130] >       "size": "87165492",
	I0804 02:04:12.610641  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610646  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610654  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610659  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610663  130743 command_runner.go:130] >     },
	I0804 02:04:12.610666  130743 command_runner.go:130] >     {
	I0804 02:04:12.610672  130743 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0804 02:04:12.610677  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610682  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0804 02:04:12.610689  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610692  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610702  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0804 02:04:12.610709  130743 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0804 02:04:12.610715  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610719  130743 command_runner.go:130] >       "size": "87174707",
	I0804 02:04:12.610723  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610729  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610733  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610737  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610743  130743 command_runner.go:130] >     },
	I0804 02:04:12.610746  130743 command_runner.go:130] >     {
	I0804 02:04:12.610751  130743 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0804 02:04:12.610757  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610763  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0804 02:04:12.610768  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610774  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610780  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0804 02:04:12.610789  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0804 02:04:12.610793  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610796  130743 command_runner.go:130] >       "size": "1363676",
	I0804 02:04:12.610801  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610805  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610811  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610815  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610821  130743 command_runner.go:130] >     },
	I0804 02:04:12.610824  130743 command_runner.go:130] >     {
	I0804 02:04:12.610831  130743 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0804 02:04:12.610837  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610842  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0804 02:04:12.610848  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610851  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610864  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0804 02:04:12.610878  130743 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0804 02:04:12.610884  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610888  130743 command_runner.go:130] >       "size": "31470524",
	I0804 02:04:12.610894  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.610898  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.610905  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.610909  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.610915  130743 command_runner.go:130] >     },
	I0804 02:04:12.610918  130743 command_runner.go:130] >     {
	I0804 02:04:12.610927  130743 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0804 02:04:12.610931  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.610942  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0804 02:04:12.610947  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610956  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.610967  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0804 02:04:12.610981  130743 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0804 02:04:12.610989  130743 command_runner.go:130] >       ],
	I0804 02:04:12.610993  130743 command_runner.go:130] >       "size": "61245718",
	I0804 02:04:12.610999  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.611005  130743 command_runner.go:130] >       "username": "nonroot",
	I0804 02:04:12.611011  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611015  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611021  130743 command_runner.go:130] >     },
	I0804 02:04:12.611024  130743 command_runner.go:130] >     {
	I0804 02:04:12.611032  130743 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0804 02:04:12.611036  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611041  130743 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0804 02:04:12.611047  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611055  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611062  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0804 02:04:12.611071  130743 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0804 02:04:12.611077  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611081  130743 command_runner.go:130] >       "size": "150779692",
	I0804 02:04:12.611087  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611091  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611097  130743 command_runner.go:130] >       },
	I0804 02:04:12.611101  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611115  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611121  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611124  130743 command_runner.go:130] >     },
	I0804 02:04:12.611130  130743 command_runner.go:130] >     {
	I0804 02:04:12.611136  130743 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0804 02:04:12.611140  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611145  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0804 02:04:12.611150  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611156  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611167  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0804 02:04:12.611177  130743 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0804 02:04:12.611183  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611187  130743 command_runner.go:130] >       "size": "117609954",
	I0804 02:04:12.611193  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611197  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611203  130743 command_runner.go:130] >       },
	I0804 02:04:12.611208  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611213  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611218  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611222  130743 command_runner.go:130] >     },
	I0804 02:04:12.611226  130743 command_runner.go:130] >     {
	I0804 02:04:12.611237  130743 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0804 02:04:12.611246  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611257  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0804 02:04:12.611266  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611272  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611297  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0804 02:04:12.611313  130743 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0804 02:04:12.611319  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611324  130743 command_runner.go:130] >       "size": "112198984",
	I0804 02:04:12.611330  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611336  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611341  130743 command_runner.go:130] >       },
	I0804 02:04:12.611347  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611352  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611358  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611363  130743 command_runner.go:130] >     },
	I0804 02:04:12.611368  130743 command_runner.go:130] >     {
	I0804 02:04:12.611380  130743 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0804 02:04:12.611389  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611398  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0804 02:04:12.611404  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611414  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611428  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0804 02:04:12.611441  130743 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0804 02:04:12.611449  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611455  130743 command_runner.go:130] >       "size": "85953945",
	I0804 02:04:12.611464  130743 command_runner.go:130] >       "uid": null,
	I0804 02:04:12.611472  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611480  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611486  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611495  130743 command_runner.go:130] >     },
	I0804 02:04:12.611503  130743 command_runner.go:130] >     {
	I0804 02:04:12.611515  130743 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0804 02:04:12.611532  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611543  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0804 02:04:12.611552  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611561  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611574  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0804 02:04:12.611588  130743 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0804 02:04:12.611597  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611607  130743 command_runner.go:130] >       "size": "63051080",
	I0804 02:04:12.611616  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611625  130743 command_runner.go:130] >         "value": "0"
	I0804 02:04:12.611633  130743 command_runner.go:130] >       },
	I0804 02:04:12.611638  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611641  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611648  130743 command_runner.go:130] >       "pinned": false
	I0804 02:04:12.611652  130743 command_runner.go:130] >     },
	I0804 02:04:12.611659  130743 command_runner.go:130] >     {
	I0804 02:04:12.611664  130743 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0804 02:04:12.611668  130743 command_runner.go:130] >       "repoTags": [
	I0804 02:04:12.611672  130743 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0804 02:04:12.611675  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611679  130743 command_runner.go:130] >       "repoDigests": [
	I0804 02:04:12.611685  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0804 02:04:12.611691  130743 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0804 02:04:12.611696  130743 command_runner.go:130] >       ],
	I0804 02:04:12.611703  130743 command_runner.go:130] >       "size": "750414",
	I0804 02:04:12.611709  130743 command_runner.go:130] >       "uid": {
	I0804 02:04:12.611714  130743 command_runner.go:130] >         "value": "65535"
	I0804 02:04:12.611719  130743 command_runner.go:130] >       },
	I0804 02:04:12.611725  130743 command_runner.go:130] >       "username": "",
	I0804 02:04:12.611731  130743 command_runner.go:130] >       "spec": null,
	I0804 02:04:12.611740  130743 command_runner.go:130] >       "pinned": true
	I0804 02:04:12.611746  130743 command_runner.go:130] >     }
	I0804 02:04:12.611754  130743 command_runner.go:130] >   ]
	I0804 02:04:12.611760  130743 command_runner.go:130] > }
	I0804 02:04:12.611910  130743 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:04:12.611929  130743 cache_images.go:84] Images are preloaded, skipping loading
	I0804 02:04:12.611938  130743 kubeadm.go:934] updating node { 192.168.39.183 8443 v1.30.3 crio true true} ...
	I0804 02:04:12.612041  130743 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-229184 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 02:04:12.612111  130743 ssh_runner.go:195] Run: crio config
	I0804 02:04:12.658089  130743 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0804 02:04:12.658132  130743 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0804 02:04:12.658144  130743 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0804 02:04:12.658149  130743 command_runner.go:130] > #
	I0804 02:04:12.658163  130743 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0804 02:04:12.658174  130743 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0804 02:04:12.658184  130743 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0804 02:04:12.658206  130743 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0804 02:04:12.658212  130743 command_runner.go:130] > # reload'.
	I0804 02:04:12.658236  130743 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0804 02:04:12.658250  130743 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0804 02:04:12.658259  130743 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0804 02:04:12.658268  130743 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0804 02:04:12.658276  130743 command_runner.go:130] > [crio]
	I0804 02:04:12.658285  130743 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0804 02:04:12.658296  130743 command_runner.go:130] > # containers images, in this directory.
	I0804 02:04:12.658304  130743 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0804 02:04:12.658318  130743 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0804 02:04:12.658329  130743 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0804 02:04:12.658341  130743 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0804 02:04:12.658351  130743 command_runner.go:130] > # imagestore = ""
	I0804 02:04:12.658359  130743 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0804 02:04:12.658370  130743 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0804 02:04:12.658380  130743 command_runner.go:130] > storage_driver = "overlay"
	I0804 02:04:12.658389  130743 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0804 02:04:12.658401  130743 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0804 02:04:12.658407  130743 command_runner.go:130] > storage_option = [
	I0804 02:04:12.658417  130743 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0804 02:04:12.658423  130743 command_runner.go:130] > ]
	I0804 02:04:12.658435  130743 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0804 02:04:12.658447  130743 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0804 02:04:12.658456  130743 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0804 02:04:12.658465  130743 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0804 02:04:12.658477  130743 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0804 02:04:12.658484  130743 command_runner.go:130] > # always happen on a node reboot
	I0804 02:04:12.658492  130743 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0804 02:04:12.658505  130743 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0804 02:04:12.658520  130743 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0804 02:04:12.658532  130743 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0804 02:04:12.658543  130743 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0804 02:04:12.658559  130743 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0804 02:04:12.658575  130743 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0804 02:04:12.658586  130743 command_runner.go:130] > # internal_wipe = true
	I0804 02:04:12.658599  130743 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0804 02:04:12.658610  130743 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0804 02:04:12.658616  130743 command_runner.go:130] > # internal_repair = false
	I0804 02:04:12.658626  130743 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0804 02:04:12.658636  130743 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0804 02:04:12.658648  130743 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0804 02:04:12.658659  130743 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0804 02:04:12.658671  130743 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0804 02:04:12.658680  130743 command_runner.go:130] > [crio.api]
	I0804 02:04:12.658688  130743 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0804 02:04:12.658699  130743 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0804 02:04:12.658712  130743 command_runner.go:130] > # IP address on which the stream server will listen.
	I0804 02:04:12.658723  130743 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0804 02:04:12.658735  130743 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0804 02:04:12.658745  130743 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0804 02:04:12.658752  130743 command_runner.go:130] > # stream_port = "0"
	I0804 02:04:12.658761  130743 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0804 02:04:12.658771  130743 command_runner.go:130] > # stream_enable_tls = false
	I0804 02:04:12.658780  130743 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0804 02:04:12.658789  130743 command_runner.go:130] > # stream_idle_timeout = ""
	I0804 02:04:12.658798  130743 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0804 02:04:12.658808  130743 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0804 02:04:12.658815  130743 command_runner.go:130] > # minutes.
	I0804 02:04:12.658821  130743 command_runner.go:130] > # stream_tls_cert = ""
	I0804 02:04:12.658833  130743 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0804 02:04:12.658842  130743 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0804 02:04:12.658851  130743 command_runner.go:130] > # stream_tls_key = ""
	I0804 02:04:12.658860  130743 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0804 02:04:12.658872  130743 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0804 02:04:12.658895  130743 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0804 02:04:12.658904  130743 command_runner.go:130] > # stream_tls_ca = ""
	I0804 02:04:12.658916  130743 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 02:04:12.658925  130743 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0804 02:04:12.658936  130743 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0804 02:04:12.658947  130743 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0804 02:04:12.658957  130743 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0804 02:04:12.658968  130743 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0804 02:04:12.658977  130743 command_runner.go:130] > [crio.runtime]
	I0804 02:04:12.658985  130743 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0804 02:04:12.658997  130743 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0804 02:04:12.659006  130743 command_runner.go:130] > # "nofile=1024:2048"
	I0804 02:04:12.659014  130743 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0804 02:04:12.659022  130743 command_runner.go:130] > # default_ulimits = [
	I0804 02:04:12.659025  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659031  130743 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0804 02:04:12.659037  130743 command_runner.go:130] > # no_pivot = false
	I0804 02:04:12.659047  130743 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0804 02:04:12.659059  130743 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0804 02:04:12.659066  130743 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0804 02:04:12.659078  130743 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0804 02:04:12.659088  130743 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0804 02:04:12.659097  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 02:04:12.659112  130743 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0804 02:04:12.659119  130743 command_runner.go:130] > # Cgroup setting for conmon
	I0804 02:04:12.659131  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0804 02:04:12.659140  130743 command_runner.go:130] > conmon_cgroup = "pod"
	I0804 02:04:12.659150  130743 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0804 02:04:12.659161  130743 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0804 02:04:12.659171  130743 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0804 02:04:12.659179  130743 command_runner.go:130] > conmon_env = [
	I0804 02:04:12.659190  130743 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 02:04:12.659200  130743 command_runner.go:130] > ]
	I0804 02:04:12.659208  130743 command_runner.go:130] > # Additional environment variables to set for all the
	I0804 02:04:12.659220  130743 command_runner.go:130] > # containers. These are overridden if set in the
	I0804 02:04:12.659229  130743 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0804 02:04:12.659238  130743 command_runner.go:130] > # default_env = [
	I0804 02:04:12.659243  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659254  130743 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0804 02:04:12.659268  130743 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0804 02:04:12.659276  130743 command_runner.go:130] > # selinux = false
	I0804 02:04:12.659285  130743 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0804 02:04:12.659298  130743 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0804 02:04:12.659307  130743 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0804 02:04:12.659316  130743 command_runner.go:130] > # seccomp_profile = ""
	I0804 02:04:12.659324  130743 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0804 02:04:12.659338  130743 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0804 02:04:12.659348  130743 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0804 02:04:12.659358  130743 command_runner.go:130] > # which might increase security.
	I0804 02:04:12.659366  130743 command_runner.go:130] > # This option is currently deprecated,
	I0804 02:04:12.659378  130743 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0804 02:04:12.659388  130743 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0804 02:04:12.659397  130743 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0804 02:04:12.659412  130743 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0804 02:04:12.659425  130743 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0804 02:04:12.659438  130743 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0804 02:04:12.659447  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.659459  130743 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0804 02:04:12.659468  130743 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0804 02:04:12.659476  130743 command_runner.go:130] > # the cgroup blockio controller.
	I0804 02:04:12.659483  130743 command_runner.go:130] > # blockio_config_file = ""
	I0804 02:04:12.659496  130743 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0804 02:04:12.659505  130743 command_runner.go:130] > # blockio parameters.
	I0804 02:04:12.659511  130743 command_runner.go:130] > # blockio_reload = false
	I0804 02:04:12.659522  130743 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0804 02:04:12.659532  130743 command_runner.go:130] > # irqbalance daemon.
	I0804 02:04:12.659540  130743 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0804 02:04:12.659550  130743 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0804 02:04:12.659565  130743 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0804 02:04:12.659579  130743 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0804 02:04:12.659592  130743 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0804 02:04:12.659607  130743 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0804 02:04:12.659617  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.659627  130743 command_runner.go:130] > # rdt_config_file = ""
	I0804 02:04:12.659635  130743 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0804 02:04:12.659646  130743 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0804 02:04:12.659674  130743 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0804 02:04:12.659684  130743 command_runner.go:130] > # separate_pull_cgroup = ""
	I0804 02:04:12.659694  130743 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0804 02:04:12.659706  130743 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0804 02:04:12.659714  130743 command_runner.go:130] > # will be added.
	I0804 02:04:12.659721  130743 command_runner.go:130] > # default_capabilities = [
	I0804 02:04:12.659727  130743 command_runner.go:130] > # 	"CHOWN",
	I0804 02:04:12.659736  130743 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0804 02:04:12.659742  130743 command_runner.go:130] > # 	"FSETID",
	I0804 02:04:12.659751  130743 command_runner.go:130] > # 	"FOWNER",
	I0804 02:04:12.659757  130743 command_runner.go:130] > # 	"SETGID",
	I0804 02:04:12.659766  130743 command_runner.go:130] > # 	"SETUID",
	I0804 02:04:12.659773  130743 command_runner.go:130] > # 	"SETPCAP",
	I0804 02:04:12.659785  130743 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0804 02:04:12.659794  130743 command_runner.go:130] > # 	"KILL",
	I0804 02:04:12.659800  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659818  130743 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0804 02:04:12.659832  130743 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0804 02:04:12.659843  130743 command_runner.go:130] > # add_inheritable_capabilities = false
	I0804 02:04:12.659856  130743 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0804 02:04:12.659867  130743 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 02:04:12.659875  130743 command_runner.go:130] > default_sysctls = [
	I0804 02:04:12.659884  130743 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0804 02:04:12.659892  130743 command_runner.go:130] > ]
	I0804 02:04:12.659899  130743 command_runner.go:130] > # List of devices on the host that a
	I0804 02:04:12.659911  130743 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0804 02:04:12.659920  130743 command_runner.go:130] > # allowed_devices = [
	I0804 02:04:12.659926  130743 command_runner.go:130] > # 	"/dev/fuse",
	I0804 02:04:12.659935  130743 command_runner.go:130] > # ]
	I0804 02:04:12.659943  130743 command_runner.go:130] > # List of additional devices. specified as
	I0804 02:04:12.659958  130743 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0804 02:04:12.659970  130743 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0804 02:04:12.659982  130743 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0804 02:04:12.659991  130743 command_runner.go:130] > # additional_devices = [
	I0804 02:04:12.659997  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660006  130743 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0804 02:04:12.660017  130743 command_runner.go:130] > # cdi_spec_dirs = [
	I0804 02:04:12.660027  130743 command_runner.go:130] > # 	"/etc/cdi",
	I0804 02:04:12.660034  130743 command_runner.go:130] > # 	"/var/run/cdi",
	I0804 02:04:12.660039  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660049  130743 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0804 02:04:12.660059  130743 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0804 02:04:12.660065  130743 command_runner.go:130] > # Defaults to false.
	I0804 02:04:12.660074  130743 command_runner.go:130] > # device_ownership_from_security_context = false
	I0804 02:04:12.660090  130743 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0804 02:04:12.660103  130743 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0804 02:04:12.660123  130743 command_runner.go:130] > # hooks_dir = [
	I0804 02:04:12.660132  130743 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0804 02:04:12.660140  130743 command_runner.go:130] > # ]
	I0804 02:04:12.660151  130743 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0804 02:04:12.660164  130743 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0804 02:04:12.660175  130743 command_runner.go:130] > # its default mounts from the following two files:
	I0804 02:04:12.660183  130743 command_runner.go:130] > #
	I0804 02:04:12.660193  130743 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0804 02:04:12.660205  130743 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0804 02:04:12.660215  130743 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0804 02:04:12.660225  130743 command_runner.go:130] > #
	I0804 02:04:12.660236  130743 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0804 02:04:12.660249  130743 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0804 02:04:12.660261  130743 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0804 02:04:12.660273  130743 command_runner.go:130] > #      only add mounts it finds in this file.
	I0804 02:04:12.660281  130743 command_runner.go:130] > #
	I0804 02:04:12.660288  130743 command_runner.go:130] > # default_mounts_file = ""
	I0804 02:04:12.660299  130743 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0804 02:04:12.660311  130743 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0804 02:04:12.660317  130743 command_runner.go:130] > pids_limit = 1024
	I0804 02:04:12.660329  130743 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0804 02:04:12.660342  130743 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0804 02:04:12.660355  130743 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0804 02:04:12.660370  130743 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0804 02:04:12.660380  130743 command_runner.go:130] > # log_size_max = -1
	I0804 02:04:12.660390  130743 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0804 02:04:12.660400  130743 command_runner.go:130] > # log_to_journald = false
	I0804 02:04:12.660409  130743 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0804 02:04:12.660422  130743 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0804 02:04:12.660434  130743 command_runner.go:130] > # Path to directory for container attach sockets.
	I0804 02:04:12.660444  130743 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0804 02:04:12.660452  130743 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0804 02:04:12.660461  130743 command_runner.go:130] > # bind_mount_prefix = ""
	I0804 02:04:12.660470  130743 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0804 02:04:12.660479  130743 command_runner.go:130] > # read_only = false
	I0804 02:04:12.660491  130743 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0804 02:04:12.660502  130743 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0804 02:04:12.660510  130743 command_runner.go:130] > # live configuration reload.
	I0804 02:04:12.660519  130743 command_runner.go:130] > # log_level = "info"
	I0804 02:04:12.660530  130743 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0804 02:04:12.660543  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.660553  130743 command_runner.go:130] > # log_filter = ""
	I0804 02:04:12.660564  130743 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0804 02:04:12.660582  130743 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0804 02:04:12.660591  130743 command_runner.go:130] > # separated by comma.
	I0804 02:04:12.660602  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660612  130743 command_runner.go:130] > # uid_mappings = ""
	I0804 02:04:12.660622  130743 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0804 02:04:12.660632  130743 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0804 02:04:12.660638  130743 command_runner.go:130] > # separated by comma.
	I0804 02:04:12.660650  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660660  130743 command_runner.go:130] > # gid_mappings = ""
	I0804 02:04:12.660670  130743 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0804 02:04:12.660683  130743 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 02:04:12.660693  130743 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 02:04:12.660710  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660722  130743 command_runner.go:130] > # minimum_mappable_uid = -1
	I0804 02:04:12.660732  130743 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0804 02:04:12.660745  130743 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0804 02:04:12.660760  130743 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0804 02:04:12.660772  130743 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0804 02:04:12.660779  130743 command_runner.go:130] > # minimum_mappable_gid = -1
	I0804 02:04:12.660790  130743 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0804 02:04:12.660804  130743 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0804 02:04:12.660812  130743 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0804 02:04:12.660823  130743 command_runner.go:130] > # ctr_stop_timeout = 30
	I0804 02:04:12.660833  130743 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0804 02:04:12.660846  130743 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0804 02:04:12.660860  130743 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0804 02:04:12.660871  130743 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0804 02:04:12.660881  130743 command_runner.go:130] > drop_infra_ctr = false
	I0804 02:04:12.660890  130743 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0804 02:04:12.660904  130743 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0804 02:04:12.660915  130743 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0804 02:04:12.660922  130743 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0804 02:04:12.660935  130743 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0804 02:04:12.660948  130743 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0804 02:04:12.660959  130743 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0804 02:04:12.660970  130743 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0804 02:04:12.660980  130743 command_runner.go:130] > # shared_cpuset = ""
	I0804 02:04:12.660990  130743 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0804 02:04:12.661001  130743 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0804 02:04:12.661012  130743 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0804 02:04:12.661022  130743 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0804 02:04:12.661032  130743 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0804 02:04:12.661042  130743 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0804 02:04:12.661054  130743 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0804 02:04:12.661063  130743 command_runner.go:130] > # enable_criu_support = false
	I0804 02:04:12.661076  130743 command_runner.go:130] > # Enable/disable the generation of the container,
	I0804 02:04:12.661089  130743 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0804 02:04:12.661099  130743 command_runner.go:130] > # enable_pod_events = false
	I0804 02:04:12.661117  130743 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0804 02:04:12.661145  130743 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0804 02:04:12.661155  130743 command_runner.go:130] > # default_runtime = "runc"
	I0804 02:04:12.661163  130743 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0804 02:04:12.661177  130743 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0804 02:04:12.661195  130743 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0804 02:04:12.661207  130743 command_runner.go:130] > # creation as a file is not desired either.
	I0804 02:04:12.661223  130743 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0804 02:04:12.661234  130743 command_runner.go:130] > # the hostname is being managed dynamically.
	I0804 02:04:12.661240  130743 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0804 02:04:12.661248  130743 command_runner.go:130] > # ]
	I0804 02:04:12.661258  130743 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0804 02:04:12.661271  130743 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0804 02:04:12.661283  130743 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0804 02:04:12.661295  130743 command_runner.go:130] > # Each entry in the table should follow the format:
	I0804 02:04:12.661303  130743 command_runner.go:130] > #
	I0804 02:04:12.661310  130743 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0804 02:04:12.661321  130743 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0804 02:04:12.661368  130743 command_runner.go:130] > # runtime_type = "oci"
	I0804 02:04:12.661393  130743 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0804 02:04:12.661402  130743 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0804 02:04:12.661414  130743 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0804 02:04:12.661424  130743 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0804 02:04:12.661433  130743 command_runner.go:130] > # monitor_env = []
	I0804 02:04:12.661440  130743 command_runner.go:130] > # privileged_without_host_devices = false
	I0804 02:04:12.661451  130743 command_runner.go:130] > # allowed_annotations = []
	I0804 02:04:12.661460  130743 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0804 02:04:12.661468  130743 command_runner.go:130] > # Where:
	I0804 02:04:12.661478  130743 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0804 02:04:12.661491  130743 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0804 02:04:12.661504  130743 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0804 02:04:12.661517  130743 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0804 02:04:12.661526  130743 command_runner.go:130] > #   in $PATH.
	I0804 02:04:12.661537  130743 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0804 02:04:12.661548  130743 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0804 02:04:12.661558  130743 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0804 02:04:12.661566  130743 command_runner.go:130] > #   state.
	I0804 02:04:12.661578  130743 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0804 02:04:12.661591  130743 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0804 02:04:12.661603  130743 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0804 02:04:12.661614  130743 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0804 02:04:12.661623  130743 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0804 02:04:12.661635  130743 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0804 02:04:12.661643  130743 command_runner.go:130] > #   The currently recognized values are:
	I0804 02:04:12.661656  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0804 02:04:12.661670  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0804 02:04:12.661682  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0804 02:04:12.661694  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0804 02:04:12.661708  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0804 02:04:12.661722  130743 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0804 02:04:12.661734  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0804 02:04:12.661749  130743 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0804 02:04:12.661763  130743 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0804 02:04:12.661775  130743 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0804 02:04:12.661784  130743 command_runner.go:130] > #   deprecated option "conmon".
	I0804 02:04:12.661803  130743 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0804 02:04:12.661813  130743 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0804 02:04:12.661824  130743 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0804 02:04:12.661835  130743 command_runner.go:130] > #   should be moved to the container's cgroup
	I0804 02:04:12.661848  130743 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0804 02:04:12.661859  130743 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0804 02:04:12.661869  130743 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0804 02:04:12.661880  130743 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0804 02:04:12.661887  130743 command_runner.go:130] > #
	I0804 02:04:12.661894  130743 command_runner.go:130] > # Using the seccomp notifier feature:
	I0804 02:04:12.661903  130743 command_runner.go:130] > #
	I0804 02:04:12.661911  130743 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0804 02:04:12.661924  130743 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0804 02:04:12.661932  130743 command_runner.go:130] > #
	I0804 02:04:12.661942  130743 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0804 02:04:12.661955  130743 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0804 02:04:12.661963  130743 command_runner.go:130] > #
	I0804 02:04:12.661973  130743 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0804 02:04:12.661981  130743 command_runner.go:130] > # feature.
	I0804 02:04:12.661986  130743 command_runner.go:130] > #
	I0804 02:04:12.661998  130743 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0804 02:04:12.662010  130743 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0804 02:04:12.662023  130743 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0804 02:04:12.662034  130743 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0804 02:04:12.662046  130743 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0804 02:04:12.662054  130743 command_runner.go:130] > #
	I0804 02:04:12.662061  130743 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0804 02:04:12.662073  130743 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0804 02:04:12.662081  130743 command_runner.go:130] > #
	I0804 02:04:12.662090  130743 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0804 02:04:12.662101  130743 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0804 02:04:12.662114  130743 command_runner.go:130] > #
	I0804 02:04:12.662124  130743 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0804 02:04:12.662134  130743 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0804 02:04:12.662142  130743 command_runner.go:130] > # limitation.
	I0804 02:04:12.662153  130743 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0804 02:04:12.662164  130743 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0804 02:04:12.662174  130743 command_runner.go:130] > runtime_type = "oci"
	I0804 02:04:12.662180  130743 command_runner.go:130] > runtime_root = "/run/runc"
	I0804 02:04:12.662189  130743 command_runner.go:130] > runtime_config_path = ""
	I0804 02:04:12.662195  130743 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0804 02:04:12.662205  130743 command_runner.go:130] > monitor_cgroup = "pod"
	I0804 02:04:12.662214  130743 command_runner.go:130] > monitor_exec_cgroup = ""
	I0804 02:04:12.662223  130743 command_runner.go:130] > monitor_env = [
	I0804 02:04:12.662231  130743 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0804 02:04:12.662239  130743 command_runner.go:130] > ]
	I0804 02:04:12.662246  130743 command_runner.go:130] > privileged_without_host_devices = false
	I0804 02:04:12.662258  130743 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0804 02:04:12.662269  130743 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0804 02:04:12.662281  130743 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0804 02:04:12.662296  130743 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0804 02:04:12.662310  130743 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0804 02:04:12.662321  130743 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0804 02:04:12.662338  130743 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0804 02:04:12.662354  130743 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0804 02:04:12.662367  130743 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0804 02:04:12.662379  130743 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0804 02:04:12.662387  130743 command_runner.go:130] > # Example:
	I0804 02:04:12.662396  130743 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0804 02:04:12.662403  130743 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0804 02:04:12.662411  130743 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0804 02:04:12.662419  130743 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0804 02:04:12.662424  130743 command_runner.go:130] > # cpuset = 0
	I0804 02:04:12.662430  130743 command_runner.go:130] > # cpushares = "0-1"
	I0804 02:04:12.662435  130743 command_runner.go:130] > # Where:
	I0804 02:04:12.662441  130743 command_runner.go:130] > # The workload name is workload-type.
	I0804 02:04:12.662448  130743 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0804 02:04:12.662453  130743 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0804 02:04:12.662458  130743 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0804 02:04:12.662465  130743 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0804 02:04:12.662471  130743 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0804 02:04:12.662476  130743 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0804 02:04:12.662483  130743 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0804 02:04:12.662488  130743 command_runner.go:130] > # Default value is set to true
	I0804 02:04:12.662492  130743 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0804 02:04:12.662497  130743 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0804 02:04:12.662501  130743 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0804 02:04:12.662505  130743 command_runner.go:130] > # Default value is set to 'false'
	I0804 02:04:12.662509  130743 command_runner.go:130] > # disable_hostport_mapping = false
	I0804 02:04:12.662515  130743 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0804 02:04:12.662518  130743 command_runner.go:130] > #
	I0804 02:04:12.662524  130743 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0804 02:04:12.662531  130743 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0804 02:04:12.662537  130743 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0804 02:04:12.662543  130743 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0804 02:04:12.662548  130743 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0804 02:04:12.662552  130743 command_runner.go:130] > [crio.image]
	I0804 02:04:12.662557  130743 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0804 02:04:12.662561  130743 command_runner.go:130] > # default_transport = "docker://"
	I0804 02:04:12.662566  130743 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0804 02:04:12.662572  130743 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0804 02:04:12.662576  130743 command_runner.go:130] > # global_auth_file = ""
	I0804 02:04:12.662580  130743 command_runner.go:130] > # The image used to instantiate infra containers.
	I0804 02:04:12.662585  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.662589  130743 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0804 02:04:12.662595  130743 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0804 02:04:12.662601  130743 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0804 02:04:12.662608  130743 command_runner.go:130] > # This option supports live configuration reload.
	I0804 02:04:12.662612  130743 command_runner.go:130] > # pause_image_auth_file = ""
	I0804 02:04:12.662619  130743 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0804 02:04:12.662625  130743 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0804 02:04:12.662634  130743 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0804 02:04:12.662642  130743 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0804 02:04:12.662648  130743 command_runner.go:130] > # pause_command = "/pause"
	I0804 02:04:12.662654  130743 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0804 02:04:12.662662  130743 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0804 02:04:12.662668  130743 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0804 02:04:12.662678  130743 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0804 02:04:12.662686  130743 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0804 02:04:12.662692  130743 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0804 02:04:12.662698  130743 command_runner.go:130] > # pinned_images = [
	I0804 02:04:12.662702  130743 command_runner.go:130] > # ]
	I0804 02:04:12.662708  130743 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0804 02:04:12.662717  130743 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0804 02:04:12.662723  130743 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0804 02:04:12.662731  130743 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0804 02:04:12.662736  130743 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0804 02:04:12.662741  130743 command_runner.go:130] > # signature_policy = ""
	I0804 02:04:12.662747  130743 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0804 02:04:12.662755  130743 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0804 02:04:12.662762  130743 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0804 02:04:12.662769  130743 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0804 02:04:12.662774  130743 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0804 02:04:12.662779  130743 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0804 02:04:12.662785  130743 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0804 02:04:12.662793  130743 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0804 02:04:12.662797  130743 command_runner.go:130] > # changing them here.
	I0804 02:04:12.662802  130743 command_runner.go:130] > # insecure_registries = [
	I0804 02:04:12.662805  130743 command_runner.go:130] > # ]
	I0804 02:04:12.662813  130743 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0804 02:04:12.662818  130743 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0804 02:04:12.662824  130743 command_runner.go:130] > # image_volumes = "mkdir"
	I0804 02:04:12.662829  130743 command_runner.go:130] > # Temporary directory to use for storing big files
	I0804 02:04:12.662833  130743 command_runner.go:130] > # big_files_temporary_dir = ""
	I0804 02:04:12.662839  130743 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0804 02:04:12.662845  130743 command_runner.go:130] > # CNI plugins.
	I0804 02:04:12.662849  130743 command_runner.go:130] > [crio.network]
	I0804 02:04:12.662858  130743 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0804 02:04:12.662866  130743 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0804 02:04:12.662870  130743 command_runner.go:130] > # cni_default_network = ""
	I0804 02:04:12.662876  130743 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0804 02:04:12.662881  130743 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0804 02:04:12.662886  130743 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0804 02:04:12.662891  130743 command_runner.go:130] > # plugin_dirs = [
	I0804 02:04:12.662897  130743 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0804 02:04:12.662901  130743 command_runner.go:130] > # ]
	I0804 02:04:12.662907  130743 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0804 02:04:12.662912  130743 command_runner.go:130] > [crio.metrics]
	I0804 02:04:12.662917  130743 command_runner.go:130] > # Globally enable or disable metrics support.
	I0804 02:04:12.662923  130743 command_runner.go:130] > enable_metrics = true
	I0804 02:04:12.662927  130743 command_runner.go:130] > # Specify enabled metrics collectors.
	I0804 02:04:12.662934  130743 command_runner.go:130] > # Per default all metrics are enabled.
	I0804 02:04:12.662941  130743 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0804 02:04:12.662949  130743 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0804 02:04:12.662955  130743 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0804 02:04:12.662961  130743 command_runner.go:130] > # metrics_collectors = [
	I0804 02:04:12.662965  130743 command_runner.go:130] > # 	"operations",
	I0804 02:04:12.662969  130743 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0804 02:04:12.662973  130743 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0804 02:04:12.662978  130743 command_runner.go:130] > # 	"operations_errors",
	I0804 02:04:12.662981  130743 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0804 02:04:12.662986  130743 command_runner.go:130] > # 	"image_pulls_by_name",
	I0804 02:04:12.662991  130743 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0804 02:04:12.662998  130743 command_runner.go:130] > # 	"image_pulls_failures",
	I0804 02:04:12.663002  130743 command_runner.go:130] > # 	"image_pulls_successes",
	I0804 02:04:12.663007  130743 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0804 02:04:12.663010  130743 command_runner.go:130] > # 	"image_layer_reuse",
	I0804 02:04:12.663015  130743 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0804 02:04:12.663018  130743 command_runner.go:130] > # 	"containers_oom_total",
	I0804 02:04:12.663022  130743 command_runner.go:130] > # 	"containers_oom",
	I0804 02:04:12.663026  130743 command_runner.go:130] > # 	"processes_defunct",
	I0804 02:04:12.663031  130743 command_runner.go:130] > # 	"operations_total",
	I0804 02:04:12.663035  130743 command_runner.go:130] > # 	"operations_latency_seconds",
	I0804 02:04:12.663041  130743 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0804 02:04:12.663046  130743 command_runner.go:130] > # 	"operations_errors_total",
	I0804 02:04:12.663052  130743 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0804 02:04:12.663056  130743 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0804 02:04:12.663062  130743 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0804 02:04:12.663067  130743 command_runner.go:130] > # 	"image_pulls_success_total",
	I0804 02:04:12.663070  130743 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0804 02:04:12.663075  130743 command_runner.go:130] > # 	"containers_oom_count_total",
	I0804 02:04:12.663080  130743 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0804 02:04:12.663084  130743 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0804 02:04:12.663087  130743 command_runner.go:130] > # ]
	I0804 02:04:12.663092  130743 command_runner.go:130] > # The port on which the metrics server will listen.
	I0804 02:04:12.663098  130743 command_runner.go:130] > # metrics_port = 9090
	I0804 02:04:12.663102  130743 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0804 02:04:12.663109  130743 command_runner.go:130] > # metrics_socket = ""
	I0804 02:04:12.663117  130743 command_runner.go:130] > # The certificate for the secure metrics server.
	I0804 02:04:12.663123  130743 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0804 02:04:12.663131  130743 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0804 02:04:12.663135  130743 command_runner.go:130] > # certificate on any modification event.
	I0804 02:04:12.663139  130743 command_runner.go:130] > # metrics_cert = ""
	I0804 02:04:12.663145  130743 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0804 02:04:12.663150  130743 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0804 02:04:12.663154  130743 command_runner.go:130] > # metrics_key = ""
	I0804 02:04:12.663159  130743 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0804 02:04:12.663165  130743 command_runner.go:130] > [crio.tracing]
	I0804 02:04:12.663171  130743 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0804 02:04:12.663176  130743 command_runner.go:130] > # enable_tracing = false
	I0804 02:04:12.663181  130743 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0804 02:04:12.663188  130743 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0804 02:04:12.663195  130743 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0804 02:04:12.663201  130743 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0804 02:04:12.663206  130743 command_runner.go:130] > # CRI-O NRI configuration.
	I0804 02:04:12.663211  130743 command_runner.go:130] > [crio.nri]
	I0804 02:04:12.663215  130743 command_runner.go:130] > # Globally enable or disable NRI.
	I0804 02:04:12.663219  130743 command_runner.go:130] > # enable_nri = false
	I0804 02:04:12.663223  130743 command_runner.go:130] > # NRI socket to listen on.
	I0804 02:04:12.663227  130743 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0804 02:04:12.663236  130743 command_runner.go:130] > # NRI plugin directory to use.
	I0804 02:04:12.663243  130743 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0804 02:04:12.663254  130743 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0804 02:04:12.663263  130743 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0804 02:04:12.663272  130743 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0804 02:04:12.663281  130743 command_runner.go:130] > # nri_disable_connections = false
	I0804 02:04:12.663299  130743 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0804 02:04:12.663310  130743 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0804 02:04:12.663316  130743 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0804 02:04:12.663322  130743 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0804 02:04:12.663328  130743 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0804 02:04:12.663334  130743 command_runner.go:130] > [crio.stats]
	I0804 02:04:12.663340  130743 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0804 02:04:12.663347  130743 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0804 02:04:12.663351  130743 command_runner.go:130] > # stats_collection_period = 0
	I0804 02:04:12.663968  130743 command_runner.go:130] ! time="2024-08-04 02:04:12.627368364Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0804 02:04:12.663997  130743 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
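The [crio.metrics] block rendered above has enable_metrics = true and leaves metrics_port at its default of 9090, so the Prometheus endpoint should be scrapeable on the node itself. A minimal Go sketch of such a scrape (illustrative only; the loopback address assumes it is run on the VM or through an SSH tunnel, and /metrics is the conventional Prometheus path):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		// enable_metrics = true and the default metrics_port = 9090 from the config dumped above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Prometheus text format; crio_* / container_runtime_* series per the collectors listed above.
		fmt.Printf("%s", body)
	}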
	I0804 02:04:12.664170  130743 cni.go:84] Creating CNI manager for ""
	I0804 02:04:12.664183  130743 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0804 02:04:12.664193  130743 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 02:04:12.664214  130743 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-229184 NodeName:multinode-229184 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 02:04:12.664420  130743 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-229184"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
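For reference, the podSubnet (10.244.0.0/16), serviceSubnet (10.96.0.0/12) and advertise address (192.168.39.183) that appear in the generated kubeadm config above can be sanity-checked with a short Go sketch using only the standard library (illustrative; not part of minikube itself):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Values taken from the kubeadm config rendered above.
		_, podCIDR, _ := net.ParseCIDR("10.244.0.0/16")
		_, svcCIDR, _ := net.ParseCIDR("10.96.0.0/12")
		nodeIP := net.ParseIP("192.168.39.183")

		// The node/advertise address must live outside both cluster-internal ranges.
		fmt.Println("node IP in pod CIDR?     ", podCIDR.Contains(nodeIP))
		fmt.Println("node IP in service CIDR? ", svcCIDR.Contains(nodeIP))
		// The two internal ranges must not overlap each other either.
		fmt.Println("pod/service CIDRs overlap?",
			podCIDR.Contains(svcCIDR.IP) || svcCIDR.Contains(podCIDR.IP))
	}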
	I0804 02:04:12.664485  130743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 02:04:12.675495  130743 command_runner.go:130] > kubeadm
	I0804 02:04:12.675516  130743 command_runner.go:130] > kubectl
	I0804 02:04:12.675520  130743 command_runner.go:130] > kubelet
	I0804 02:04:12.675549  130743 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 02:04:12.675601  130743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 02:04:12.686341  130743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0804 02:04:12.703403  130743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 02:04:12.720323  130743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0804 02:04:12.736973  130743 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I0804 02:04:12.740884  130743 command_runner.go:130] > 192.168.39.183	control-plane.minikube.internal
	I0804 02:04:12.740969  130743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:04:12.889057  130743 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 02:04:12.904426  130743 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184 for IP: 192.168.39.183
	I0804 02:04:12.904451  130743 certs.go:194] generating shared ca certs ...
	I0804 02:04:12.904465  130743 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:04:12.904644  130743 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 02:04:12.904729  130743 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 02:04:12.904742  130743 certs.go:256] generating profile certs ...
	I0804 02:04:12.904841  130743 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/client.key
	I0804 02:04:12.904920  130743 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key.8b2c4c64
	I0804 02:04:12.904975  130743 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key
	I0804 02:04:12.904994  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0804 02:04:12.905015  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0804 02:04:12.905033  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0804 02:04:12.905051  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0804 02:04:12.905067  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0804 02:04:12.905098  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0804 02:04:12.905116  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0804 02:04:12.905134  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0804 02:04:12.905199  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 02:04:12.905240  130743 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 02:04:12.905256  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 02:04:12.905286  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 02:04:12.905320  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 02:04:12.905350  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 02:04:12.905427  130743 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:04:12.905467  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:12.905487  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem -> /usr/share/ca-certificates/97407.pem
	I0804 02:04:12.905504  130743 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> /usr/share/ca-certificates/974072.pem
	I0804 02:04:12.906150  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 02:04:12.931137  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 02:04:12.956076  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 02:04:12.981546  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 02:04:13.006435  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0804 02:04:13.029966  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 02:04:13.055503  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 02:04:13.081867  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/multinode-229184/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 02:04:13.106886  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 02:04:13.131014  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 02:04:13.155467  130743 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 02:04:13.186244  130743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 02:04:13.219177  130743 ssh_runner.go:195] Run: openssl version
	I0804 02:04:13.232715  130743 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0804 02:04:13.232791  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 02:04:13.287380  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296276  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296327  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.296387  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:04:13.306244  130743 command_runner.go:130] > b5213941
	I0804 02:04:13.306359  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 02:04:13.320254  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 02:04:13.338752  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343684  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343855  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.343909  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 02:04:13.349982  130743 command_runner.go:130] > 51391683
	I0804 02:04:13.350224  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 02:04:13.360979  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 02:04:13.374801  130743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384089  130743 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384239  130743 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.384291  130743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 02:04:13.390521  130743 command_runner.go:130] > 3ec20f2e
	I0804 02:04:13.390592  130743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 02:04:13.417950  130743 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:04:13.423972  130743 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:04:13.424006  130743 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0804 02:04:13.424013  130743 command_runner.go:130] > Device: 253,1	Inode: 9433131     Links: 1
	I0804 02:04:13.424019  130743 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0804 02:04:13.424026  130743 command_runner.go:130] > Access: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424030  130743 command_runner.go:130] > Modify: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424035  130743 command_runner.go:130] > Change: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424040  130743 command_runner.go:130] >  Birth: 2024-08-04 01:57:10.923482530 +0000
	I0804 02:04:13.424111  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 02:04:13.432144  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.435416  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 02:04:13.450083  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.450356  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 02:04:13.457299  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.457655  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 02:04:13.464561  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.464691  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 02:04:13.472512  130743 command_runner.go:130] > Certificate will not expire
	I0804 02:04:13.472643  130743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0804 02:04:13.482576  130743 command_runner.go:130] > Certificate will not expire
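The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above only ask whether each certificate expires within the next 24 hours (86400 seconds). An equivalent check in Go, as a minimal standard-library sketch (the path mirrors one of the certificates probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// One of the certificates checked in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM data found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of `openssl x509 -checkend 86400`: does the cert expire within 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}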
	I0804 02:04:13.482733  130743 kubeadm.go:392] StartCluster: {Name:multinode-229184 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-229184 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.130 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.152 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:04:13.482900  130743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 02:04:13.482970  130743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 02:04:13.540361  130743 command_runner.go:130] > b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c
	I0804 02:04:13.540387  130743 command_runner.go:130] > 19e85822cc0c4868dd92301e8ff26e66d1d874d9d1105ccf4cea0d34541573f1
	I0804 02:04:13.540393  130743 command_runner.go:130] > 0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e
	I0804 02:04:13.540411  130743 command_runner.go:130] > 68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb
	I0804 02:04:13.540419  130743 command_runner.go:130] > 3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6
	I0804 02:04:13.540429  130743 command_runner.go:130] > bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b
	I0804 02:04:13.540437  130743 command_runner.go:130] > 997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e
	I0804 02:04:13.540463  130743 command_runner.go:130] > b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a
	I0804 02:04:13.540475  130743 command_runner.go:130] > f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc
	I0804 02:04:13.542799  130743 cri.go:89] found id: "b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c"
	I0804 02:04:13.542820  130743 cri.go:89] found id: "19e85822cc0c4868dd92301e8ff26e66d1d874d9d1105ccf4cea0d34541573f1"
	I0804 02:04:13.542824  130743 cri.go:89] found id: "0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e"
	I0804 02:04:13.542827  130743 cri.go:89] found id: "68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb"
	I0804 02:04:13.542829  130743 cri.go:89] found id: "3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6"
	I0804 02:04:13.542832  130743 cri.go:89] found id: "bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b"
	I0804 02:04:13.542835  130743 cri.go:89] found id: "997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e"
	I0804 02:04:13.542838  130743 cri.go:89] found id: "b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a"
	I0804 02:04:13.542841  130743 cri.go:89] found id: "f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc"
	I0804 02:04:13.542846  130743 cri.go:89] found id: ""
	I0804 02:04:13.542891  130743 ssh_runner.go:195] Run: sudo runc list -f json
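	The container IDs above come from a crictl listing filtered by the kube-system namespace label: each line of the --quiet output is one container ID, which cri.go then echoes as "found id". A minimal sketch of the same listing (hypothetical helper name, assuming sudo and crictl are available on the node; not the actual minikube code):

	// Minimal sketch: list kube-system container IDs the way the log above does.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs crictl with a namespace label filter and
	// splits the quiet output into one container ID per line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}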
	
	
	==> CRI-O <==
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.511974049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737306511948783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0af5574e-f57b-412d-a3fc-bd99421ed39e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.512535838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8359f49-97dd-4197-a1fd-a562815f4fa0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.512608680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8359f49-97dd-4197-a1fd-a562815f4fa0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.513041904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8359f49-97dd-4197-a1fd-a562815f4fa0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.559731616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ed53733-2d67-4ff4-9c28-b11716ea3f96 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.559805666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ed53733-2d67-4ff4-9c28-b11716ea3f96 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.561030562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3f044cf-15c9-4c6b-89f8-cfe7210fb481 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.561625104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737306561603499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3f044cf-15c9-4c6b-89f8-cfe7210fb481 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.562192055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ced2ee-6e88-4087-9125-082c760a2089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.562245419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ced2ee-6e88-4087-9125-082c760a2089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.562787653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ced2ee-6e88-4087-9125-082c760a2089 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.604779982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5a70e6e-85d8-451f-88e1-8813de476f3c name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.604860204Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5a70e6e-85d8-451f-88e1-8813de476f3c name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.606660039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09f30016-2fdb-4091-8577-f574a6a155f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.607227032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737306607183694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09f30016-2fdb-4091-8577-f574a6a155f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.607775453Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5076bda4-34e8-47c0-869b-e51a1e9e4858 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.607851539Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5076bda4-34e8-47c0-869b-e51a1e9e4858 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.608643173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5076bda4-34e8-47c0-869b-e51a1e9e4858 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.654761370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7b953ec-dfb8-4576-9318-e086bef97aef name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.654851404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7b953ec-dfb8-4576-9318-e086bef97aef name=/runtime.v1.RuntimeService/Version
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.655916499Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=148f4466-b64b-4816-bcdc-88b5757fdcaa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.656708488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722737306656683899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=148f4466-b64b-4816-bcdc-88b5757fdcaa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.657515596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3aa79f25-2334-47e8-9d31-f526ee588da0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.657570201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3aa79f25-2334-47e8-9d31-f526ee588da0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:08:26 multinode-229184 crio[2889]: time="2024-08-04 02:08:26.658319744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d5779444b83313779cfe35c1c1e8cdbcb4dc33e22d1707d372e59a152713519,PodSandboxId:ac217b95bd857dd46870cb52cfe9a3af2dd715b40f766080eaa262deaeb87505,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722737091191312821,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722737066493126365,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175,PodSandboxId:7052ee9c14022804099b61be920796b7c44e7ce28fee5f05f3cc9dca0e05fa09,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722737058307746152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7
a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f62ca3dc87b1d55d1e7581ef02b8a673ac64ef60b2a5b773b821dd8eb68e22,PodSandboxId:187b59dc0d2555548baf408a7377a3d6dc8012bbd166b49f4503798ecf22bfff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722737057988087887,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},A
nnotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9,PodSandboxId:a69d4dc36c963700445f8ea55778c190b275dc1ff71c60228df1aadcb82a477f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722737057872022854,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.ku
bernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738,PodSandboxId:12511f4c9f62542117eadbac185c1d4ac7f808f486a9a17d70683e6d0d95a2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722737057659260339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5,PodSandboxId:0b8d18faaf50fabb0f6e0f8eefb5f5a8dce93f3ade9bd44388f99eca0bee6e43,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722737057652976760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotations:map[string]string{io.kubernetes.container.hash: be863e
03,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b,PodSandboxId:efd1f26e59a206be34098b25a32cacfc8cc4bddc577d1bf865e04732224c613b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722737057570464687,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:map[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf,PodSandboxId:35cc64ddca94fb5b2044e4cdd2cd0d9da22b51749d10c5c3848bdf8a650f6478,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722737057453814004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c,PodSandboxId:914062abfbb2891de94c509b586fbd1f0d73ce4a634afa0631738551dc613f64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722737053391526637,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-s8kfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf584da9-583d-4aeb-9543-47388a20b06d,},Annotations:map[string]string{io.kubernetes.container.hash: a1873a20,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:451755c9cae308862cc45dc834fd0544214391121372e8cfe19cb08fbc1e582f,PodSandboxId:9091e3232b4e4c61b5a0f7ca9d22dae51d7726484ce11102aed2f4f347a28d0d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722736729040610772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jq4l7,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 88dc5b8c-6f06-4bf4-b8e9-9388b4018a10,},Annotations:map[string]string{io.kubernetes.container.hash: 8d763071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f8e8d602fa18409a11cbe8132097d4a17ecc86e819fc90e2c7a667932241e5e,PodSandboxId:2ff7b863562642710d449f303b9798cdb87b3a9cb80e48efaf9721781347fe4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722736669916495479,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14a14d46-fda3-41ed-9ef2-d2a54615cc0e,},Annotations:map[string]string{io.kubernetes.container.hash: 92643ded,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb,PodSandboxId:f14c29a7d94b4927bf72f76b367543d9a40f8181f1e07d9fdf876b83300ea60b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722736657937507958,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-85878,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 263f3468-8f44-46ac-adc1-3daab3d99200,},Annotations:map[string]string{io.kubernetes.container.hash: ebdc1537,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6,PodSandboxId:f2e81613fe5ae2e71ee14f1b4d6fa5c59a00b1a2682ddd5fef092a507f507ac4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722736654134846162,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cnd2r,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 92c92b5d-bd0b-41d0-810e-66e7a4d0097e,},Annotations:map[string]string{io.kubernetes.container.hash: 75c5c142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e,PodSandboxId:5f06df713675f6bf928a9fc4849f46aa38d82f97b93ef78bc288760ae73d7f6d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722736634756657985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
661af473af1219f5106110b5354791ac,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a,PodSandboxId:7d2c2feafa63903e31519edfc8cf521d792380c3be4bae0ab6bc962b6509875f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722736634741132492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65ed41416f735fd1bd68a5690b2cfe4,},Annotation
s:map[string]string{io.kubernetes.container.hash: be863e03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b,PodSandboxId:4ba53ac02e903d556ba72f1d01291672d68cecf7e0a78fa1018c2aef70e094a7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722736634791718840,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df09d9431e5f7bf804b7cbd24a37d103,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc,PodSandboxId:21908fae5b9cf1674a348dc5b96270ad7f0d1e7a0ba0b3f16f9fb2cb03c63f9c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722736634710016617,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229184,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbb7ae03d08ea4eda37b6250ac5d1e79,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2c4a264f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3aa79f25-2334-47e8-9d31-f526ee588da0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d5779444b833       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   ac217b95bd857       busybox-fc5497c4f-jq4l7
	069c0ab9ae296       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   914062abfbb28       coredns-7db6d8ff4d-s8kfn
	de2d1594af7cd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   7052ee9c14022       kube-proxy-cnd2r
	24f62ca3dc87b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   187b59dc0d255       storage-provisioner
	fbac06e11821b       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   a69d4dc36c963       kindnet-85878
	43c8f76c44337       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   12511f4c9f625       kube-controller-manager-multinode-229184
	c90d734c1552d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   0b8d18faaf50f       etcd-multinode-229184
	768428b12453d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   efd1f26e59a20       kube-apiserver-multinode-229184
	2d83f840bcd2d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   35cc64ddca94f       kube-scheduler-multinode-229184
	b7d560c128154       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   914062abfbb28       coredns-7db6d8ff4d-s8kfn
	451755c9cae30       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   9091e3232b4e4       busybox-fc5497c4f-jq4l7
	0f8e8d602fa18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   2ff7b86356264       storage-provisioner
	68dc307aba765       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   f14c29a7d94b4       kindnet-85878
	3eb91b14876af       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   f2e81613fe5ae       kube-proxy-cnd2r
	bcdd0c1a35983       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   4ba53ac02e903       kube-controller-manager-multinode-229184
	997af80342f16       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   5f06df713675f       kube-scheduler-multinode-229184
	b7c7ca7827fb9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   7d2c2feafa639       etcd-multinode-229184
	f19c91e30619a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   21908fae5b9cf       kube-apiserver-multinode-229184
	
	
	==> coredns [069c0ab9ae296363b4ddb5a6ae98d8f4b00cb3049f4a3850837b9b79dd2a1260] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43518 - 52193 "HINFO IN 6497034716295087957.8583571617719234661. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0144819s
	
	
	==> coredns [b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47718 - 54294 "HINFO IN 7286318690051177686.7691211596556706498. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015977015s
	
	
	==> describe nodes <==
	Name:               multinode-229184
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229184
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=multinode-229184
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_04T01_57_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 01:57:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229184
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:08:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 02:04:25 +0000   Sun, 04 Aug 2024 01:57:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    multinode-229184
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc307d996e9243b285c82774ea0fb47c
	  System UUID:                dc307d99-6e92-43b2-85c8-2774ea0fb47c
	  Boot ID:                    603f0dbd-bdd0-4a81-80ff-c63c2f5b26f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jq4l7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 coredns-7db6d8ff4d-s8kfn                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-229184                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-85878                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-229184             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-229184    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-cnd2r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-229184             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeAllocatableEnforced  11m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m    kubelet          Node multinode-229184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m    kubelet          Node multinode-229184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m    kubelet          Node multinode-229184 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m    node-controller  Node multinode-229184 event: Registered Node multinode-229184 in Controller
	  Normal  NodeReady                10m    kubelet          Node multinode-229184 status is now: NodeReady
	  Normal  Starting                 4m1s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s   kubelet          Node multinode-229184 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s   kubelet          Node multinode-229184 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s   kubelet          Node multinode-229184 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s  node-controller  Node multinode-229184 event: Registered Node multinode-229184 in Controller
	
	
	Name:               multinode-229184-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229184-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=40edfa2c305b426121bc436e9f593c10662235e6
	                    minikube.k8s.io/name=multinode-229184
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_04T02_05_02_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 02:05:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229184-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:06:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:06:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:06:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:06:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 04 Aug 2024 02:05:33 +0000   Sun, 04 Aug 2024 02:06:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-229184-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbef89f3c761447ea37b3747483f1a85
	  System UUID:                cbef89f3-c761-447e-a37b-3747483f1a85
	  Boot ID:                    f0030be3-092a-4ccb-842e-a557f03824f4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mccck    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-v7wgl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-jfj5c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m59s                  kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-229184-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-229184-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-229184-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeReady                9m43s                  kubelet          Node multinode-229184-m02 status is now: NodeReady
	  Normal  Starting                 3m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-229184-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-229184-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-229184-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-229184-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-229184-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.170549] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.168857] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.282844] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +4.307765] systemd-fstab-generator[771]: Ignoring "noauto" option for root device
	[  +0.057105] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.539026] systemd-fstab-generator[957]: Ignoring "noauto" option for root device
	[  +0.503504] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.546567] systemd-fstab-generator[1303]: Ignoring "noauto" option for root device
	[  +0.075897] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.205582] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.450535] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +5.351809] kauditd_printk_skb: 56 callbacks suppressed
	[Aug 4 01:58] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 4 02:04] systemd-fstab-generator[2808]: Ignoring "noauto" option for root device
	[  +0.148867] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.170780] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[  +0.140751] systemd-fstab-generator[2846]: Ignoring "noauto" option for root device
	[  +0.288346] systemd-fstab-generator[2874]: Ignoring "noauto" option for root device
	[  +1.349100] systemd-fstab-generator[2974]: Ignoring "noauto" option for root device
	[  +4.562732] kauditd_printk_skb: 132 callbacks suppressed
	[  +7.564648] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	[  +0.106092] kauditd_printk_skb: 62 callbacks suppressed
	[  +8.550367] kauditd_printk_skb: 19 callbacks suppressed
	[  +2.918495] systemd-fstab-generator[4007]: Ignoring "noauto" option for root device
	[ +14.642146] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b7c7ca7827fb9dd3469400c0ae489b1b5400c098f1b62956a8fbf5183266aa5a] <==
	{"level":"info","ts":"2024-08-04T01:58:26.620166Z","caller":"traceutil/trace.go:171","msg":"trace[1185917055] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:485; }","duration":"245.795882ms","start":"2024-08-04T01:58:26.374359Z","end":"2024-08-04T01:58:26.620155Z","steps":["trace[1185917055] 'agreement among raft nodes before linearized reading'  (duration: 245.553535ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:23.337707Z","caller":"traceutil/trace.go:171","msg":"trace[1902374710] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"301.442443ms","start":"2024-08-04T01:59:23.03624Z","end":"2024-08-04T01:59:23.337683Z","steps":["trace[1902374710] 'process raft request'  (duration: 301.095742ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:23.338784Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T01:59:23.03622Z","time spent":"301.974674ms","remote":"127.0.0.1:47674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":925,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:0 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:871 >> failure:<>"}
	{"level":"warn","ts":"2024-08-04T01:59:23.726457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.455292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4097872256555623048 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:591 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-dsv65\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-04T01:59:23.726642Z","caller":"traceutil/trace.go:171","msg":"trace[604818588] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"318.123459ms","start":"2024-08-04T01:59:23.408506Z","end":"2024-08-04T01:59:23.72663Z","steps":["trace[604818588] 'process raft request'  (duration: 138.180768ms)","trace[604818588] 'compare'  (duration: 179.071415ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-04T01:59:23.726727Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-04T01:59:23.40849Z","time spent":"318.200448ms","remote":"127.0.0.1:47674","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2350,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-dsv65\" mod_revision:591 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-dsv65\" value_size:2296 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-dsv65\" > >"}
	{"level":"warn","ts":"2024-08-04T01:59:24.010545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.046613ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4097872256555623052 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:38de911b1aed228b>","response":"size:41"}
	{"level":"info","ts":"2024-08-04T01:59:24.011021Z","caller":"traceutil/trace.go:171","msg":"trace[788522474] linearizableReadLoop","detail":"{readStateIndex:633; appliedIndex:631; }","duration":"204.538095ms","start":"2024-08-04T01:59:23.806471Z","end":"2024-08-04T01:59:24.011009Z","steps":["trace[788522474] 'read index received'  (duration: 51.942023ms)","trace[788522474] 'applied index is now lower than readState.Index'  (duration: 152.595282ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-04T01:59:24.011252Z","caller":"traceutil/trace.go:171","msg":"trace[65515223] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"204.847388ms","start":"2024-08-04T01:59:23.806396Z","end":"2024-08-04T01:59:24.011244Z","steps":["trace[65515223] 'process raft request'  (duration: 204.379021ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:24.011339Z","caller":"traceutil/trace.go:171","msg":"trace[1948321475] transaction","detail":"{read_only:false; number_of_response:1; response_revision:594; }","duration":"203.900075ms","start":"2024-08-04T01:59:23.807434Z","end":"2024-08-04T01:59:24.011334Z","steps":["trace[1948321475] 'process raft request'  (duration: 203.440411ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:24.011444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.965169ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-229184-m03\" ","response":"range_response_count:1 size:2039"}
	{"level":"info","ts":"2024-08-04T01:59:24.011484Z","caller":"traceutil/trace.go:171","msg":"trace[1463434038] range","detail":"{range_begin:/registry/minions/multinode-229184-m03; range_end:; response_count:1; response_revision:594; }","duration":"205.022808ms","start":"2024-08-04T01:59:23.80645Z","end":"2024-08-04T01:59:24.011472Z","steps":["trace[1463434038] 'agreement among raft nodes before linearized reading'  (duration: 204.968511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-04T01:59:24.011804Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.310046ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-04T01:59:24.011827Z","caller":"traceutil/trace.go:171","msg":"trace[1135990474] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:595; }","duration":"205.35086ms","start":"2024-08-04T01:59:23.806469Z","end":"2024-08-04T01:59:24.01182Z","steps":["trace[1135990474] 'agreement among raft nodes before linearized reading'  (duration: 205.295293ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T01:59:27.769389Z","caller":"traceutil/trace.go:171","msg":"trace[859937074] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"133.704395ms","start":"2024-08-04T01:59:27.635663Z","end":"2024-08-04T01:59:27.769367Z","steps":["trace[859937074] 'process raft request'  (duration: 132.751509ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-04T02:02:39.454925Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T02:02:39.457981Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-229184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"]}
	{"level":"warn","ts":"2024-08-04T02:02:39.458182Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458237Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458364Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:02:39.458441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T02:02:39.551011Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f87838631c8138de","current-leader-member-id":"f87838631c8138de"}
	{"level":"info","ts":"2024-08-04T02:02:39.554185Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:02:39.554381Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:02:39.554429Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-229184","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"]}
	
	
	==> etcd [c90d734c1552da3017051e95c1f45bf53effc28e71873634cdfa04ff030353b5] <==
	{"level":"info","ts":"2024-08-04T02:04:18.192366Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T02:04:18.192373Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-04T02:04:18.19262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de switched to configuration voters=(17904122316942555358)"}
	{"level":"info","ts":"2024-08-04T02:04:18.192666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","added-peer-id":"f87838631c8138de","added-peer-peer-urls":["https://192.168.39.183:2380"]}
	{"level":"info","ts":"2024-08-04T02:04:18.192759Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dc4003dc2fbf749","local-member-id":"f87838631c8138de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:04:18.192781Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:04:18.207399Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T02:04:18.233317Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f87838631c8138de","initial-advertise-peer-urls":["https://192.168.39.183:2380"],"listen-peer-urls":["https://192.168.39.183:2380"],"advertise-client-urls":["https://192.168.39.183:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.183:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T02:04:18.239315Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:04:18.239342Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.183:2380"}
	{"level":"info","ts":"2024-08-04T02:04:18.239351Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T02:04:19.376155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.3762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.37623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgPreVoteResp from f87838631c8138de at term 2"}
	{"level":"info","ts":"2024-08-04T02:04:19.376248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de received MsgVoteResp from f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f87838631c8138de became leader at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.376272Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f87838631c8138de elected leader f87838631c8138de at term 3"}
	{"level":"info","ts":"2024-08-04T02:04:19.378132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:04:19.378138Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f87838631c8138de","local-member-attributes":"{Name:multinode-229184 ClientURLs:[https://192.168.39.183:2379]}","request-path":"/0/members/f87838631c8138de/attributes","cluster-id":"2dc4003dc2fbf749","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T02:04:19.37873Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:04:19.378925Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T02:04:19.378958Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T02:04:19.380221Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T02:04:19.380605Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.183:2379"}
	
	
	==> kernel <==
	 02:08:27 up 11 min,  0 users,  load average: 0.04, 0.20, 0.17
	Linux multinode-229184 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [68dc307aba765932e77e5f1ae5c6b7c29a9de60043089d9b19f124d2ab89a7cb] <==
	I0804 02:01:58.903555       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:08.902457       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:08.902623       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:08.902826       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:08.902872       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:08.902940       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:08.902960       1 main.go:299] handling current node
	I0804 02:02:18.902945       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:18.903124       1 main.go:299] handling current node
	I0804 02:02:18.903159       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:18.903178       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:18.903323       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:18.903345       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:28.900987       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:28.901185       1 main.go:299] handling current node
	I0804 02:02:28.901214       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:28.901251       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:28.901428       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:28.901451       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	I0804 02:02:38.894520       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:02:38.894583       1 main.go:299] handling current node
	I0804 02:02:38.894598       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:02:38.894637       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:02:38.894757       1 main.go:295] Handling node with IPs: map[192.168.39.152:{}]
	I0804 02:02:38.894763       1 main.go:322] Node multinode-229184-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [fbac06e11821b815adaa55068682b36f15adab78eafb3d79a8f46ca919ee51f9] <==
	I0804 02:07:18.910213       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:07:28.915985       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:07:28.916090       1 main.go:299] handling current node
	I0804 02:07:28.916105       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:07:28.916111       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:07:38.911134       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:07:38.911202       1 main.go:299] handling current node
	I0804 02:07:38.911233       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:07:38.911241       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:07:48.917685       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:07:48.917788       1 main.go:299] handling current node
	I0804 02:07:48.917817       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:07:48.917835       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:07:58.909924       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:07:58.910339       1 main.go:299] handling current node
	I0804 02:07:58.910399       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:07:58.910422       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:08:08.918691       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:08:08.918761       1 main.go:299] handling current node
	I0804 02:08:08.918786       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:08:08.918810       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:08:18.909970       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0804 02:08:18.910140       1 main.go:322] Node multinode-229184-m02 has CIDR [10.244.1.0/24] 
	I0804 02:08:18.910256       1 main.go:295] Handling node with IPs: map[192.168.39.183:{}]
	I0804 02:08:18.910263       1 main.go:299] handling current node
	
	
	==> kube-apiserver [768428b12453d5a476852615a77bf6f26f1631708cf938688de7252f96320a5b] <==
	I0804 02:04:20.698253       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 02:04:20.698350       1 policy_source.go:224] refreshing policies
	I0804 02:04:20.714345       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 02:04:20.714457       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0804 02:04:20.714501       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0804 02:04:20.723548       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 02:04:20.735776       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 02:04:20.736346       1 aggregator.go:165] initial CRD sync complete...
	I0804 02:04:20.736422       1 autoregister_controller.go:141] Starting autoregister controller
	I0804 02:04:20.736448       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 02:04:20.736471       1 cache.go:39] Caches are synced for autoregister controller
	I0804 02:04:20.749368       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0804 02:04:20.750190       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 02:04:20.750394       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0804 02:04:20.767891       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 02:04:20.795673       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0804 02:04:20.844191       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 02:04:21.620305       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 02:04:25.790743       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 02:04:25.919938       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 02:04:25.931026       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 02:04:25.999923       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 02:04:26.009760       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0804 02:04:33.540509       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0804 02:04:33.736162       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f19c91e30619a643610f8f9706293316fe93ef24381aae6f139f778934eb06cc] <==
	W0804 02:02:39.501861       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.501917       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.501973       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502266       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502470       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502646       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502707       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502764       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502819       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.502993       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.503126       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.503384       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504016       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504784       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504879       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504939       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.504995       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 02:02:39.505273       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0804 02:02:39.505309       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	W0804 02:02:39.505376       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505441       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505499       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505565       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505619       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:02:39.505674       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [43c8f76c4433747e8df4dc2a7f02ec7a21e1c7b7488e08495b3e7b2581301738] <==
	I0804 02:05:02.555207       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m02\" does not exist"
	I0804 02:05:02.568595       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m02" podCIDRs=["10.244.1.0/24"]
	I0804 02:05:03.453767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.322µs"
	I0804 02:05:03.468436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.151µs"
	I0804 02:05:03.480616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.952µs"
	I0804 02:05:03.527577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.808µs"
	I0804 02:05:03.537205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.826µs"
	I0804 02:05:03.542160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.855µs"
	I0804 02:05:21.303400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:21.325218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="182.129µs"
	I0804 02:05:21.339534       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.048µs"
	I0804 02:05:24.912637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.499203ms"
	I0804 02:05:24.912739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.301µs"
	I0804 02:05:39.643920       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:40.871516       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 02:05:40.871569       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:05:40.879268       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.2.0/24"]
	I0804 02:05:59.868350       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:06:05.542620       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:06:48.654649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.590058ms"
	I0804 02:06:48.655205       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="151.552µs"
	I0804 02:06:53.526199       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mhj4m"
	I0804 02:06:53.547345       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mhj4m"
	I0804 02:06:53.547417       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-24bvr"
	I0804 02:06:53.596266       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-24bvr"
	
	
	==> kube-controller-manager [bcdd0c1a3598303c4e77258c06344e8c4658ee0463b7406cffccd9316526d89b] <==
	I0804 01:57:52.589191       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0804 01:58:22.587719       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m02\" does not exist"
	I0804 01:58:22.595256       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-229184-m02"
	I0804 01:58:22.604879       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m02" podCIDRs=["10.244.1.0/24"]
	I0804 01:58:43.328748       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 01:58:45.690480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.313165ms"
	I0804 01:58:45.715949       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.334ms"
	I0804 01:58:45.716029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.721µs"
	I0804 01:58:49.250832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.256537ms"
	I0804 01:58:49.250931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.021µs"
	I0804 01:58:49.585712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.493314ms"
	I0804 01:58:49.586251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.808µs"
	I0804 01:59:23.796685       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 01:59:23.796768       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 01:59:24.029629       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.2.0/24"]
	I0804 01:59:27.620028       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-229184-m03"
	I0804 01:59:43.446385       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m03"
	I0804 02:00:12.970277       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:00:14.004944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:00:14.005595       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229184-m03\" does not exist"
	I0804 02:00:14.016835       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-229184-m03" podCIDRs=["10.244.3.0/24"]
	I0804 02:00:33.686469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:01:17.678706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-229184-m02"
	I0804 02:01:17.734206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.604098ms"
	I0804 02:01:17.734397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.523µs"
	
	
	==> kube-proxy [3eb91b14876af2b34ebc406da46dc365f83758c625701931061b6dbfe58540c6] <==
	I0804 01:57:34.492134       1 server_linux.go:69] "Using iptables proxy"
	I0804 01:57:34.507984       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	I0804 01:57:34.554644       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 01:57:34.554727       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 01:57:34.554747       1 server_linux.go:165] "Using iptables Proxier"
	I0804 01:57:34.559399       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 01:57:34.559949       1 server.go:872] "Version info" version="v1.30.3"
	I0804 01:57:34.560222       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 01:57:34.561885       1 config.go:192] "Starting service config controller"
	I0804 01:57:34.563657       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 01:57:34.562151       1 config.go:319] "Starting node config controller"
	I0804 01:57:34.564735       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 01:57:34.563470       1 config.go:101] "Starting endpoint slice config controller"
	I0804 01:57:34.564838       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 01:57:34.665132       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 01:57:34.665192       1 shared_informer.go:320] Caches are synced for node config
	I0804 01:57:34.665149       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [de2d1594af7cd3c12773240c3fe3366ff159d07596b0b296698ca0b8bb4ad175] <==
	I0804 02:04:19.194499       1 server_linux.go:69] "Using iptables proxy"
	I0804 02:04:20.786638       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.183"]
	I0804 02:04:20.922176       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0804 02:04:20.922259       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 02:04:20.922296       1 server_linux.go:165] "Using iptables Proxier"
	I0804 02:04:20.927001       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0804 02:04:20.927347       1 server.go:872] "Version info" version="v1.30.3"
	I0804 02:04:20.927389       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:04:20.929579       1 config.go:192] "Starting service config controller"
	I0804 02:04:20.929628       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 02:04:20.929666       1 config.go:101] "Starting endpoint slice config controller"
	I0804 02:04:20.929671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 02:04:20.932701       1 config.go:319] "Starting node config controller"
	I0804 02:04:20.932737       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 02:04:21.030748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0804 02:04:21.030846       1 shared_informer.go:320] Caches are synced for service config
	I0804 02:04:21.033494       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2d83f840bcd2d93c86d62a7869ed34e8b8618749a082b07f9df539bf6227adaf] <==
	I0804 02:04:18.662883       1 serving.go:380] Generated self-signed cert in-memory
	W0804 02:04:20.657131       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 02:04:20.657520       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 02:04:20.657616       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 02:04:20.657641       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 02:04:20.767736       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0804 02:04:20.768659       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:04:20.775946       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 02:04:20.776185       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 02:04:20.779412       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 02:04:20.776205       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0804 02:04:20.879702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [997af80342f16b71780e1bae7dc942c79cdb024de648089f51d51bb67834811e] <==
	E0804 01:57:18.124644       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0804 01:57:18.138930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.138980       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.238680       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0804 01:57:18.238732       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0804 01:57:18.249001       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0804 01:57:18.249213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0804 01:57:18.279185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0804 01:57:18.279684       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0804 01:57:18.304311       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0804 01:57:18.304358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0804 01:57:18.413774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0804 01:57:18.413881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0804 01:57:18.428486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.428542       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.432278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0804 01:57:18.432409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0804 01:57:18.512975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0804 01:57:18.513126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0804 01:57:18.527856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0804 01:57:18.528129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0804 01:57:18.606693       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0804 01:57:18.606725       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0804 01:57:21.013971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0804 02:02:39.465927       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.258084    3846 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/263f3468-8f44-46ac-adc1-3daab3d99200-cni-cfg\") pod \"kindnet-85878\" (UID: \"263f3468-8f44-46ac-adc1-3daab3d99200\") " pod="kube-system/kindnet-85878"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: E0804 02:04:26.440029    3846 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-229184\" already exists" pod="kube-system/kube-controller-manager-multinode-229184"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: E0804 02:04:26.441240    3846 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-229184\" already exists" pod="kube-system/kube-apiserver-multinode-229184"
	Aug 04 02:04:26 multinode-229184 kubelet[3846]: I0804 02:04:26.464469    3846 scope.go:117] "RemoveContainer" containerID="b7d560c128154c1bbf0ba2e523fc06442e33f9006dc857eea0122e4f6916c39c"
	Aug 04 02:04:33 multinode-229184 kubelet[3846]: I0804 02:04:33.394955    3846 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 04 02:05:25 multinode-229184 kubelet[3846]: E0804 02:05:25.306269    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:05:25 multinode-229184 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 02:06:25 multinode-229184 kubelet[3846]: E0804 02:06:25.308093    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:06:25 multinode-229184 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:06:25 multinode-229184 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:06:25 multinode-229184 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:06:25 multinode-229184 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 02:07:25 multinode-229184 kubelet[3846]: E0804 02:07:25.306038    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:07:25 multinode-229184 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:07:25 multinode-229184 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:07:25 multinode-229184 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:07:25 multinode-229184 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 02:08:25 multinode-229184 kubelet[3846]: E0804 02:08:25.306146    3846 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:08:25 multinode-229184 kubelet[3846]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:08:25 multinode-229184 kubelet[3846]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:08:25 multinode-229184 kubelet[3846]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:08:25 multinode-229184 kubelet[3846]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 02:08:26.237235  132665 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19364-90243/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-229184 -n multinode-229184
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-229184 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.38s)
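Note on the "bufio.Scanner: token too long" error in the stderr above: Go's bufio.Scanner refuses any single line longer than its buffer (bufio.MaxScanTokenSize, 64 KiB by default), which is why reading lastStart.txt failed there. The following is a minimal, self-contained Go sketch (not minikube's actual code) showing how Scanner.Buffer raises that limit:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines prints a file line by line, allowing lines up to 10 MiB.
// With the default buffer, any line over 64 KiB makes Scan stop and
// Err return bufio.ErrTooLong ("token too long").
func readLongLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: readlonglines <file>")
		os.Exit(1)
	}
	if err := readLongLines(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}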

                                                
                                    
TestPreload (274.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-016167 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-016167 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.574074614s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-016167 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-016167 image pull gcr.io/k8s-minikube/busybox: (3.021098422s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-016167
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-016167: exit status 82 (2m0.488525314s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-016167"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-016167 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-04 02:16:33.267170286 +0000 UTC m=+5672.084679011
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-016167 -n test-preload-016167
E0804 02:16:42.266180   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-016167 -n test-preload-016167: exit status 3 (18.482450349s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0804 02:16:51.745718  135582 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0804 02:16:51.745743  135582 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-016167" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-016167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-016167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-016167: (1.110455706s)
--- FAIL: TestPreload (274.68s)

                                                
                                    
TestKubernetesUpgrade (418.13s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.431109956s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-168045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-168045" primary control-plane node in "kubernetes-upgrade-168045" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 02:18:46.735403  137098 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:18:46.735522  137098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:18:46.735531  137098 out.go:304] Setting ErrFile to fd 2...
	I0804 02:18:46.735536  137098 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:18:46.735718  137098 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:18:46.736312  137098 out.go:298] Setting JSON to false
	I0804 02:18:46.737235  137098 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14471,"bootTime":1722723456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:18:46.737300  137098 start.go:139] virtualization: kvm guest
	I0804 02:18:46.738985  137098 out.go:177] * [kubernetes-upgrade-168045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:18:46.740816  137098 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:18:46.740858  137098 notify.go:220] Checking for updates...
	I0804 02:18:46.743703  137098 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:18:46.746353  137098 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:18:46.747551  137098 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:18:46.748840  137098 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:18:46.750416  137098 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:18:46.751928  137098 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:18:46.787005  137098 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 02:18:46.788505  137098 start.go:297] selected driver: kvm2
	I0804 02:18:46.788520  137098 start.go:901] validating driver "kvm2" against <nil>
	I0804 02:18:46.788533  137098 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:18:46.789519  137098 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:18:49.141853  137098 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:18:49.159628  137098 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:18:49.159694  137098 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 02:18:49.159937  137098 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 02:18:49.159962  137098 cni.go:84] Creating CNI manager for ""
	I0804 02:18:49.159970  137098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:18:49.159977  137098 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 02:18:49.160054  137098 start.go:340] cluster config:
	{Name:kubernetes-upgrade-168045 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:18:49.160161  137098 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:18:49.161976  137098 out.go:177] * Starting "kubernetes-upgrade-168045" primary control-plane node in "kubernetes-upgrade-168045" cluster
	I0804 02:18:49.163428  137098 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 02:18:49.163463  137098 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 02:18:49.163478  137098 cache.go:56] Caching tarball of preloaded images
	I0804 02:18:49.163566  137098 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:18:49.163577  137098 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0804 02:18:49.163912  137098 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/config.json ...
	I0804 02:18:49.163936  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/config.json: {Name:mk98840ad9ddcb93c3ed86eb27ab92c390e75ad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:18:49.164064  137098 start.go:360] acquireMachinesLock for kubernetes-upgrade-168045: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:19:14.934508  137098 start.go:364] duration metric: took 25.770416286s to acquireMachinesLock for "kubernetes-upgrade-168045"
	I0804 02:19:14.934579  137098 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-168045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 02:19:14.934690  137098 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 02:19:14.937036  137098 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0804 02:19:14.937294  137098 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:19:14.937350  137098 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:19:14.955714  137098 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0804 02:19:14.956216  137098 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:19:14.956973  137098 main.go:141] libmachine: Using API Version  1
	I0804 02:19:14.957003  137098 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:19:14.957511  137098 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:19:14.957722  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetMachineName
	I0804 02:19:14.957900  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:14.958112  137098 start.go:159] libmachine.API.Create for "kubernetes-upgrade-168045" (driver="kvm2")
	I0804 02:19:14.958142  137098 client.go:168] LocalClient.Create starting
	I0804 02:19:14.958220  137098 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 02:19:14.958272  137098 main.go:141] libmachine: Decoding PEM data...
	I0804 02:19:14.958289  137098 main.go:141] libmachine: Parsing certificate...
	I0804 02:19:14.958356  137098 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 02:19:14.958374  137098 main.go:141] libmachine: Decoding PEM data...
	I0804 02:19:14.958385  137098 main.go:141] libmachine: Parsing certificate...
	I0804 02:19:14.958401  137098 main.go:141] libmachine: Running pre-create checks...
	I0804 02:19:14.958413  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .PreCreateCheck
	I0804 02:19:14.958752  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetConfigRaw
	I0804 02:19:14.959157  137098 main.go:141] libmachine: Creating machine...
	I0804 02:19:14.959171  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Create
	I0804 02:19:14.959307  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Creating KVM machine...
	I0804 02:19:14.960664  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found existing default KVM network
	I0804 02:19:14.961852  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:14.961650  137457 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:17:7f} reservation:<nil>}
	I0804 02:19:14.962752  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:14.962658  137457 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I0804 02:19:14.962798  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | created network xml: 
	I0804 02:19:14.962817  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | <network>
	I0804 02:19:14.962829  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   <name>mk-kubernetes-upgrade-168045</name>
	I0804 02:19:14.962851  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   <dns enable='no'/>
	I0804 02:19:14.962864  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   
	I0804 02:19:14.962873  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0804 02:19:14.962892  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |     <dhcp>
	I0804 02:19:14.962921  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0804 02:19:14.962933  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |     </dhcp>
	I0804 02:19:14.962940  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   </ip>
	I0804 02:19:14.962945  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG |   
	I0804 02:19:14.962952  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | </network>
	I0804 02:19:14.962959  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | 
	I0804 02:19:14.968875  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | trying to create private KVM network mk-kubernetes-upgrade-168045 192.168.50.0/24...
	I0804 02:19:15.039994  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | private KVM network mk-kubernetes-upgrade-168045 192.168.50.0/24 created
	I0804 02:19:15.040066  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045 ...
	I0804 02:19:15.040088  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:15.039961  137457 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:19:15.040106  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 02:19:15.040147  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 02:19:15.305146  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:15.304955  137457 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa...
	I0804 02:19:15.459492  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:15.459330  137457 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/kubernetes-upgrade-168045.rawdisk...
	I0804 02:19:15.459537  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Writing magic tar header
	I0804 02:19:15.459559  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Writing SSH key tar header
	I0804 02:19:15.459574  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:15.459475  137457 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045 ...
	I0804 02:19:15.459636  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045
	I0804 02:19:15.459659  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045 (perms=drwx------)
	I0804 02:19:15.459676  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 02:19:15.459689  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:19:15.459701  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 02:19:15.459713  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 02:19:15.459746  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 02:19:15.459761  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 02:19:15.459774  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home/jenkins
	I0804 02:19:15.459787  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Checking permissions on dir: /home
	I0804 02:19:15.459802  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 02:19:15.459842  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 02:19:15.459884  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 02:19:15.459898  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Skipping /home - not owner
	I0804 02:19:15.459910  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Creating domain...
	I0804 02:19:15.461277  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) define libvirt domain using xml: 
	I0804 02:19:15.461306  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) <domain type='kvm'>
	I0804 02:19:15.461320  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <name>kubernetes-upgrade-168045</name>
	I0804 02:19:15.461340  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <memory unit='MiB'>2200</memory>
	I0804 02:19:15.461371  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <vcpu>2</vcpu>
	I0804 02:19:15.461386  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <features>
	I0804 02:19:15.461395  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <acpi/>
	I0804 02:19:15.461412  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <apic/>
	I0804 02:19:15.461427  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <pae/>
	I0804 02:19:15.461437  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     
	I0804 02:19:15.461451  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   </features>
	I0804 02:19:15.461469  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <cpu mode='host-passthrough'>
	I0804 02:19:15.461483  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   
	I0804 02:19:15.461491  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   </cpu>
	I0804 02:19:15.461507  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <os>
	I0804 02:19:15.461515  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <type>hvm</type>
	I0804 02:19:15.461524  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <boot dev='cdrom'/>
	I0804 02:19:15.461543  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <boot dev='hd'/>
	I0804 02:19:15.461557  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <bootmenu enable='no'/>
	I0804 02:19:15.461565  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   </os>
	I0804 02:19:15.461578  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   <devices>
	I0804 02:19:15.461587  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <disk type='file' device='cdrom'>
	I0804 02:19:15.461607  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/boot2docker.iso'/>
	I0804 02:19:15.461624  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <target dev='hdc' bus='scsi'/>
	I0804 02:19:15.461639  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <readonly/>
	I0804 02:19:15.461656  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </disk>
	I0804 02:19:15.461671  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <disk type='file' device='disk'>
	I0804 02:19:15.461685  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 02:19:15.461727  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/kubernetes-upgrade-168045.rawdisk'/>
	I0804 02:19:15.461754  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <target dev='hda' bus='virtio'/>
	I0804 02:19:15.461765  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </disk>
	I0804 02:19:15.461786  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <interface type='network'>
	I0804 02:19:15.461800  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <source network='mk-kubernetes-upgrade-168045'/>
	I0804 02:19:15.461809  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <model type='virtio'/>
	I0804 02:19:15.461817  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </interface>
	I0804 02:19:15.461839  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <interface type='network'>
	I0804 02:19:15.461852  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <source network='default'/>
	I0804 02:19:15.461866  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <model type='virtio'/>
	I0804 02:19:15.461889  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </interface>
	I0804 02:19:15.461899  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <serial type='pty'>
	I0804 02:19:15.461908  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <target port='0'/>
	I0804 02:19:15.461918  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </serial>
	I0804 02:19:15.461927  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <console type='pty'>
	I0804 02:19:15.461938  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <target type='serial' port='0'/>
	I0804 02:19:15.461947  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </console>
	I0804 02:19:15.461963  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     <rng model='virtio'>
	I0804 02:19:15.461987  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)       <backend model='random'>/dev/random</backend>
	I0804 02:19:15.462023  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     </rng>
	I0804 02:19:15.462048  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     
	I0804 02:19:15.462060  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)     
	I0804 02:19:15.462070  137098 main.go:141] libmachine: (kubernetes-upgrade-168045)   </devices>
	I0804 02:19:15.462078  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) </domain>
	I0804 02:19:15.462091  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) 
	I0804 02:19:15.466655  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:be:81:79 in network default
	I0804 02:19:15.467545  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:15.467567  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Ensuring networks are active...
	I0804 02:19:15.468420  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Ensuring network default is active
	I0804 02:19:15.468796  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Ensuring network mk-kubernetes-upgrade-168045 is active
	I0804 02:19:15.469327  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Getting domain xml...
	I0804 02:19:15.470282  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Creating domain...
	I0804 02:19:16.804812  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Waiting to get IP...
	I0804 02:19:16.805834  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:16.806254  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:16.806277  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:16.806229  137457 retry.go:31] will retry after 237.222569ms: waiting for machine to come up
	I0804 02:19:17.045029  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.045448  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.045482  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:17.045319  137457 retry.go:31] will retry after 380.786576ms: waiting for machine to come up
	I0804 02:19:17.427896  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.428379  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.428410  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:17.428340  137457 retry.go:31] will retry after 300.554568ms: waiting for machine to come up
	I0804 02:19:17.731068  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.731466  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:17.731571  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:17.731467  137457 retry.go:31] will retry after 514.551753ms: waiting for machine to come up
	I0804 02:19:18.247368  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:18.247874  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:18.247907  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:18.247821  137457 retry.go:31] will retry after 706.210544ms: waiting for machine to come up
	I0804 02:19:18.955055  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:18.955354  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:18.955375  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:18.955306  137457 retry.go:31] will retry after 747.075018ms: waiting for machine to come up
	I0804 02:19:19.704377  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:19.704934  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:19.704976  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:19.704846  137457 retry.go:31] will retry after 815.420135ms: waiting for machine to come up
	I0804 02:19:20.522191  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:20.522654  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:20.522677  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:20.522595  137457 retry.go:31] will retry after 1.21068082s: waiting for machine to come up
	I0804 02:19:21.734554  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:21.735033  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:21.735062  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:21.734960  137457 retry.go:31] will retry after 1.798983111s: waiting for machine to come up
	I0804 02:19:23.536045  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:23.536464  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:23.536497  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:23.536414  137457 retry.go:31] will retry after 1.932060162s: waiting for machine to come up
	I0804 02:19:25.469864  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:25.470294  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:25.470342  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:25.470250  137457 retry.go:31] will retry after 2.368814013s: waiting for machine to come up
	I0804 02:19:27.840707  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:27.841134  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:27.841168  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:27.841085  137457 retry.go:31] will retry after 2.614741855s: waiting for machine to come up
	I0804 02:19:30.457317  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:30.457861  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:30.457886  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:30.457811  137457 retry.go:31] will retry after 4.226076337s: waiting for machine to come up
	I0804 02:19:34.685335  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:34.685732  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find current IP address of domain kubernetes-upgrade-168045 in network mk-kubernetes-upgrade-168045
	I0804 02:19:34.685766  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | I0804 02:19:34.685671  137457 retry.go:31] will retry after 4.096410111s: waiting for machine to come up
	I0804 02:19:38.786373  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:38.786925  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has current primary IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:38.786942  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Found IP for machine: 192.168.50.156
	I0804 02:19:38.786955  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Reserving static IP address...
	I0804 02:19:38.787368  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-168045", mac: "52:54:00:dc:cc:20", ip: "192.168.50.156"} in network mk-kubernetes-upgrade-168045
	I0804 02:19:38.863535  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Reserved static IP address: 192.168.50.156
	I0804 02:19:38.863569  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Getting to WaitForSSH function...
	I0804 02:19:38.863579  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Waiting for SSH to be available...
	I0804 02:19:38.866100  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:38.866647  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:38.866671  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:38.866846  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Using SSH client type: external
	I0804 02:19:38.866867  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Using SSH private key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa (-rw-------)
	I0804 02:19:38.866893  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0804 02:19:38.866926  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | About to run SSH command:
	I0804 02:19:38.866940  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | exit 0
	I0804 02:19:38.997245  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | SSH cmd err, output: <nil>: 
	I0804 02:19:38.997573  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) KVM machine creation complete!
	I0804 02:19:38.997905  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetConfigRaw
	I0804 02:19:38.998516  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:38.998744  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:38.998914  137098 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0804 02:19:38.998930  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetState
	I0804 02:19:39.000221  137098 main.go:141] libmachine: Detecting operating system of created instance...
	I0804 02:19:39.000240  137098 main.go:141] libmachine: Waiting for SSH to be available...
	I0804 02:19:39.000248  137098 main.go:141] libmachine: Getting to WaitForSSH function...
	I0804 02:19:39.000256  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.002757  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.003225  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.003276  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.003347  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.003545  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.003674  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.003805  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.003959  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:39.004220  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:39.004238  137098 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0804 02:19:39.120978  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 02:19:39.121002  137098 main.go:141] libmachine: Detecting the provisioner...
	I0804 02:19:39.121013  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.124087  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.124522  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.124564  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.124688  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.124914  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.125063  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.125217  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.125374  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:39.125579  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:39.125593  137098 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0804 02:19:39.242621  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0804 02:19:39.242700  137098 main.go:141] libmachine: found compatible host: buildroot
	I0804 02:19:39.242709  137098 main.go:141] libmachine: Provisioning with buildroot...
	I0804 02:19:39.242720  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetMachineName
	I0804 02:19:39.243002  137098 buildroot.go:166] provisioning hostname "kubernetes-upgrade-168045"
	I0804 02:19:39.243039  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetMachineName
	I0804 02:19:39.243286  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.246022  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.246413  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.246443  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.246561  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.246740  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.246935  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.247123  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.247289  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:39.247453  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:39.247465  137098 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-168045 && echo "kubernetes-upgrade-168045" | sudo tee /etc/hostname
	I0804 02:19:39.380477  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-168045
	
	I0804 02:19:39.380518  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.383506  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.383909  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.383938  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.384142  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.384350  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.384546  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.384730  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.384981  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:39.385227  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:39.385249  137098 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-168045' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-168045/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-168045' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 02:19:39.510431  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 02:19:39.510481  137098 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 02:19:39.510512  137098 buildroot.go:174] setting up certificates
	I0804 02:19:39.510532  137098 provision.go:84] configureAuth start
	I0804 02:19:39.510544  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetMachineName
	I0804 02:19:39.510838  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetIP
	I0804 02:19:39.513977  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.514345  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.514375  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.514541  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.516862  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.517229  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.517257  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.517403  137098 provision.go:143] copyHostCerts
	I0804 02:19:39.517470  137098 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 02:19:39.517480  137098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:19:39.517538  137098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 02:19:39.517645  137098 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 02:19:39.517652  137098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:19:39.517672  137098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 02:19:39.517741  137098 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 02:19:39.517749  137098 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:19:39.517766  137098 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 02:19:39.517824  137098 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-168045 san=[127.0.0.1 192.168.50.156 kubernetes-upgrade-168045 localhost minikube]
	I0804 02:19:39.707633  137098 provision.go:177] copyRemoteCerts
	I0804 02:19:39.707692  137098 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 02:19:39.707718  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.710791  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.711190  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.711235  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.711436  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.711645  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.711826  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.711985  137098 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:19:39.801480  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 02:19:39.826455  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0804 02:19:39.851687  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0804 02:19:39.877672  137098 provision.go:87] duration metric: took 367.121321ms to configureAuth
	I0804 02:19:39.877702  137098 buildroot.go:189] setting minikube options for container-runtime
	I0804 02:19:39.877921  137098 config.go:182] Loaded profile config "kubernetes-upgrade-168045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0804 02:19:39.878017  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:39.880822  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.881166  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:39.881204  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:39.881323  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:39.881562  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.881723  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:39.881860  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:39.882039  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:39.882245  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:39.882263  137098 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 02:19:40.168260  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 02:19:40.168300  137098 main.go:141] libmachine: Checking connection to Docker...
	I0804 02:19:40.168313  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetURL
	I0804 02:19:40.169859  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Using libvirt version 6000000
	I0804 02:19:40.172167  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.172466  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.172493  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.172673  137098 main.go:141] libmachine: Docker is up and running!
	I0804 02:19:40.172689  137098 main.go:141] libmachine: Reticulating splines...
	I0804 02:19:40.172704  137098 client.go:171] duration metric: took 25.214553025s to LocalClient.Create
	I0804 02:19:40.172725  137098 start.go:167] duration metric: took 25.214616845s to libmachine.API.Create "kubernetes-upgrade-168045"
	I0804 02:19:40.172735  137098 start.go:293] postStartSetup for "kubernetes-upgrade-168045" (driver="kvm2")
	I0804 02:19:40.172745  137098 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 02:19:40.172762  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:40.173016  137098 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 02:19:40.173040  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:40.175349  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.175668  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.175707  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.175872  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:40.176049  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:40.176234  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:40.176364  137098 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:19:40.264560  137098 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 02:19:40.269056  137098 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 02:19:40.269091  137098 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 02:19:40.269188  137098 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 02:19:40.269294  137098 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 02:19:40.269445  137098 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 02:19:40.281829  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:19:40.306415  137098 start.go:296] duration metric: took 133.660389ms for postStartSetup
	I0804 02:19:40.306529  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetConfigRaw
	I0804 02:19:40.307170  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetIP
	I0804 02:19:40.310236  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.310673  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.310708  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.310996  137098 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/config.json ...
	I0804 02:19:40.311297  137098 start.go:128] duration metric: took 25.376591575s to createHost
	I0804 02:19:40.311329  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:40.313868  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.314203  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.314234  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.314378  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:40.314578  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:40.314722  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:40.314845  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:40.314991  137098 main.go:141] libmachine: Using SSH client type: native
	I0804 02:19:40.315166  137098 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.156 22 <nil> <nil>}
	I0804 02:19:40.315177  137098 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 02:19:40.434351  137098 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722737980.410155315
	
	I0804 02:19:40.434377  137098 fix.go:216] guest clock: 1722737980.410155315
	I0804 02:19:40.434386  137098 fix.go:229] Guest: 2024-08-04 02:19:40.410155315 +0000 UTC Remote: 2024-08-04 02:19:40.311312033 +0000 UTC m=+53.623556730 (delta=98.843282ms)
	I0804 02:19:40.434410  137098 fix.go:200] guest clock delta is within tolerance: 98.843282ms
	I0804 02:19:40.434416  137098 start.go:83] releasing machines lock for "kubernetes-upgrade-168045", held for 25.499872872s
	I0804 02:19:40.434436  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:40.434733  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetIP
	I0804 02:19:40.437647  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.438113  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.438152  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.438432  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:40.439056  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:40.439277  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:19:40.439366  137098 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 02:19:40.439422  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:40.439542  137098 ssh_runner.go:195] Run: cat /version.json
	I0804 02:19:40.439570  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:19:40.442411  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.442578  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.442782  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.442811  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.443041  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:40.443064  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:40.443072  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:40.443248  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:19:40.443248  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:40.443425  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:19:40.443475  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:40.443559  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:19:40.443632  137098 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:19:40.443652  137098 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:19:40.552130  137098 ssh_runner.go:195] Run: systemctl --version
	I0804 02:19:40.558914  137098 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 02:19:40.716086  137098 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 02:19:40.722656  137098 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 02:19:40.722732  137098 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 02:19:40.740041  137098 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0804 02:19:40.740077  137098 start.go:495] detecting cgroup driver to use...
	I0804 02:19:40.740157  137098 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 02:19:40.757723  137098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 02:19:40.773584  137098 docker.go:217] disabling cri-docker service (if available) ...
	I0804 02:19:40.773653  137098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 02:19:40.790747  137098 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 02:19:40.806298  137098 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 02:19:40.928779  137098 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 02:19:41.072671  137098 docker.go:233] disabling docker service ...
	I0804 02:19:41.072758  137098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 02:19:41.090864  137098 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 02:19:41.103928  137098 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 02:19:41.249130  137098 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 02:19:41.388001  137098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 02:19:41.403226  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 02:19:41.422718  137098 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0804 02:19:41.422791  137098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:19:41.434181  137098 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 02:19:41.434257  137098 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:19:41.445798  137098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:19:41.457234  137098 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:19:41.468700  137098 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 02:19:41.481223  137098 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 02:19:41.495236  137098 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0804 02:19:41.495317  137098 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0804 02:19:41.512865  137098 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 02:19:41.523526  137098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:19:41.665788  137098 ssh_runner.go:195] Run: sudo systemctl restart crio
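	(Annotation: the sed edits above set pause_image, cgroup_manager and conmon_cgroup before this restart. A rough sketch of what the drop-in would carry afterwards is shown below; the TOML section headers are assumed from CRI-O's usual layout and were not captured from the VM.)
	# /etc/crio/crio.conf.d/02-crio.conf (illustrative sketch, not captured output)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"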
	I0804 02:19:41.816395  137098 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 02:19:41.816478  137098 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 02:19:41.821475  137098 start.go:563] Will wait 60s for crictl version
	I0804 02:19:41.821545  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:41.825475  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 02:19:41.872101  137098 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 02:19:41.872215  137098 ssh_runner.go:195] Run: crio --version
	I0804 02:19:41.905926  137098 ssh_runner.go:195] Run: crio --version
	I0804 02:19:41.939533  137098 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0804 02:19:41.940987  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetIP
	I0804 02:19:41.944106  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:41.944512  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:19:41.944553  137098 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:19:41.944797  137098 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0804 02:19:41.949437  137098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 02:19:41.963399  137098 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-168045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.156 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 02:19:41.963573  137098 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 02:19:41.963633  137098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:19:41.997454  137098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 02:19:41.997540  137098 ssh_runner.go:195] Run: which lz4
	I0804 02:19:42.001562  137098 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0804 02:19:42.006165  137098 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0804 02:19:42.006208  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0804 02:19:43.762549  137098 crio.go:462] duration metric: took 1.761025877s to copy over tarball
	I0804 02:19:43.762649  137098 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0804 02:19:46.686078  137098 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.923388415s)
	I0804 02:19:46.686110  137098 crio.go:469] duration metric: took 2.923525148s to extract the tarball
	I0804 02:19:46.686118  137098 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0804 02:19:46.731164  137098 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:19:46.783841  137098 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0804 02:19:46.783873  137098 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0804 02:19:46.783974  137098 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 02:19:46.784051  137098 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 02:19:46.784092  137098 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0804 02:19:46.784045  137098 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 02:19:46.784166  137098 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0804 02:19:46.783977  137098 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 02:19:46.783984  137098 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 02:19:46.783984  137098 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0804 02:19:46.785600  137098 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 02:19:46.785764  137098 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0804 02:19:46.785823  137098 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 02:19:46.785765  137098 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 02:19:46.785858  137098 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0804 02:19:46.786184  137098 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 02:19:46.786259  137098 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 02:19:46.786591  137098 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0804 02:19:46.905089  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0804 02:19:46.934123  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0804 02:19:46.946753  137098 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0804 02:19:46.946798  137098 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0804 02:19:46.946855  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:46.986434  137098 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0804 02:19:46.986492  137098 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0804 02:19:46.986511  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0804 02:19:46.986542  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.001714  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0804 02:19:47.021140  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0804 02:19:47.021201  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0804 02:19:47.060010  137098 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0804 02:19:47.060072  137098 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0804 02:19:47.060131  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.065929  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0804 02:19:47.068376  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0804 02:19:47.103479  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0804 02:19:47.114109  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0804 02:19:47.114684  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0804 02:19:47.116018  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 02:19:47.132917  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0804 02:19:47.214225  137098 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0804 02:19:47.214257  137098 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0804 02:19:47.214288  137098 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0804 02:19:47.214298  137098 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0804 02:19:47.214344  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.214344  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.228208  137098 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0804 02:19:47.228251  137098 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 02:19:47.228293  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.229552  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0804 02:19:47.229564  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0804 02:19:47.229746  137098 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0804 02:19:47.229780  137098 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0804 02:19:47.229814  137098 ssh_runner.go:195] Run: which crictl
	I0804 02:19:47.233382  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0804 02:19:47.300973  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0804 02:19:47.301029  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0804 02:19:47.301089  137098 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0804 02:19:47.317436  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0804 02:19:47.336835  137098 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0804 02:19:47.757449  137098 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 02:19:47.904228  137098 cache_images.go:92] duration metric: took 1.120333201s to LoadCachedImages
	W0804 02:19:47.904336  137098 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19364-90243/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0804 02:19:47.904358  137098 kubeadm.go:934] updating node { 192.168.50.156 8443 v1.20.0 crio true true} ...
	I0804 02:19:47.904506  137098 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-168045 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 02:19:47.904575  137098 ssh_runner.go:195] Run: crio config
	I0804 02:19:47.971796  137098 cni.go:84] Creating CNI manager for ""
	I0804 02:19:47.971828  137098 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:19:47.971846  137098 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 02:19:47.971869  137098 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.156 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-168045 NodeName:kubernetes-upgrade-168045 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0804 02:19:47.972063  137098 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-168045"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 02:19:47.972147  137098 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0804 02:19:47.983031  137098 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 02:19:47.983113  137098 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 02:19:47.993459  137098 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0804 02:19:48.012379  137098 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 02:19:48.029671  137098 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0804 02:19:48.047964  137098 ssh_runner.go:195] Run: grep 192.168.50.156	control-plane.minikube.internal$ /etc/hosts
	I0804 02:19:48.052050  137098 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0804 02:19:48.064626  137098 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:19:48.196095  137098 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 02:19:48.214893  137098 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045 for IP: 192.168.50.156
	I0804 02:19:48.214917  137098 certs.go:194] generating shared ca certs ...
	I0804 02:19:48.214938  137098 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.215147  137098 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 02:19:48.215209  137098 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 02:19:48.215224  137098 certs.go:256] generating profile certs ...
	I0804 02:19:48.215299  137098 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.key
	I0804 02:19:48.215318  137098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.crt with IP's: []
	I0804 02:19:48.427980  137098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.crt ...
	I0804 02:19:48.428021  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.crt: {Name:mk3d46c72ed5cdf137e3c7311acf3d2967cfac27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.428252  137098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.key ...
	I0804 02:19:48.428276  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.key: {Name:mk5d4757de1c9709c3e8cc0374c7beb773de7694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.428398  137098 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key.f82d62f5
	I0804 02:19:48.428422  137098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt.f82d62f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.156]
	I0804 02:19:48.482520  137098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt.f82d62f5 ...
	I0804 02:19:48.482553  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt.f82d62f5: {Name:mk9bda4d9f212f687dceb6b3ba34498f325dc8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.530936  137098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key.f82d62f5 ...
	I0804 02:19:48.530981  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key.f82d62f5: {Name:mk569cd6064e75afe882fae57d551af315b7f9f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.531133  137098 certs.go:381] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt.f82d62f5 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt
	I0804 02:19:48.531259  137098 certs.go:385] copying /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key.f82d62f5 -> /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key
	I0804 02:19:48.531346  137098 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.key
	I0804 02:19:48.531366  137098 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.crt with IP's: []
	I0804 02:19:48.847356  137098 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.crt ...
	I0804 02:19:48.847396  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.crt: {Name:mk18bc7a7db9ca54fd6588422852c3addcefaaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.847620  137098 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.key ...
	I0804 02:19:48.847648  137098 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.key: {Name:mk8092409b7ddaff067e5225ebeed98af5f8c008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:19:48.847929  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 02:19:48.847990  137098 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 02:19:48.848006  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 02:19:48.848039  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 02:19:48.848074  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 02:19:48.848108  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 02:19:48.848166  137098 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:19:48.849090  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 02:19:48.877903  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 02:19:48.906699  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 02:19:48.933810  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 02:19:48.964950  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0804 02:19:48.993965  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0804 02:19:49.024175  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 02:19:49.053662  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0804 02:19:49.090948  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 02:19:49.123863  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 02:19:49.162308  137098 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 02:19:49.215072  137098 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 02:19:49.232347  137098 ssh_runner.go:195] Run: openssl version
	I0804 02:19:49.239082  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 02:19:49.250238  137098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:19:49.255440  137098 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:19:49.255515  137098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:19:49.263864  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 02:19:49.280193  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 02:19:49.295069  137098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 02:19:49.300043  137098 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:19:49.300143  137098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 02:19:49.306520  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
	I0804 02:19:49.318024  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 02:19:49.329037  137098 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 02:19:49.335248  137098 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:19:49.335328  137098 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 02:19:49.341411  137098 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
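	(Annotation: the three link steps above follow the OpenSSL subject-hash convention: the link name is the output of "openssl x509 -hash -noout" plus a ".0" suffix, which in this run yields b5213941.0, 51391683.0 and 3ec20f2e.0. A minimal shell sketch of the same operation for one certificate is shown below; the HASH variable name is illustrative and not taken from the log.)
	# compute the subject hash and create the c_rehash-style symlink (sketch)
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"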
	I0804 02:19:49.352945  137098 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:19:49.357771  137098 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0804 02:19:49.357841  137098 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-168045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-168045 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.156 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:19:49.357940  137098 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 02:19:49.358035  137098 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 02:19:49.406131  137098 cri.go:89] found id: ""
	I0804 02:19:49.406226  137098 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0804 02:19:49.416925  137098 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 02:19:49.427798  137098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 02:19:49.438786  137098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 02:19:49.438814  137098 kubeadm.go:157] found existing configuration files:
	
	I0804 02:19:49.438881  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 02:19:49.449414  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 02:19:49.449498  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 02:19:49.460018  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 02:19:49.470489  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 02:19:49.470567  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 02:19:49.481660  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 02:19:49.491667  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 02:19:49.491742  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 02:19:49.505199  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 02:19:49.518929  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 02:19:49.519013  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 02:19:49.532274  137098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 02:19:49.680847  137098 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 02:19:49.681232  137098 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 02:19:49.877819  137098 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 02:19:49.878061  137098 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 02:19:49.878238  137098 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 02:19:50.154817  137098 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 02:19:50.157881  137098 out.go:204]   - Generating certificates and keys ...
	I0804 02:19:50.158010  137098 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 02:19:50.158120  137098 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 02:19:50.404911  137098 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0804 02:19:50.541789  137098 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0804 02:19:50.669802  137098 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0804 02:19:50.837973  137098 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0804 02:19:51.161684  137098 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0804 02:19:51.161875  137098 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-168045 localhost] and IPs [192.168.50.156 127.0.0.1 ::1]
	I0804 02:19:51.225443  137098 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0804 02:19:51.225699  137098 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-168045 localhost] and IPs [192.168.50.156 127.0.0.1 ::1]
	I0804 02:19:51.431639  137098 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0804 02:19:51.522539  137098 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0804 02:19:51.588632  137098 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0804 02:19:51.588843  137098 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 02:19:51.712919  137098 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 02:19:51.875341  137098 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 02:19:52.213267  137098 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 02:19:52.363858  137098 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 02:19:52.380412  137098 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 02:19:52.381688  137098 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 02:19:52.381764  137098 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 02:19:52.532751  137098 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 02:19:52.535102  137098 out.go:204]   - Booting up control plane ...
	I0804 02:19:52.535271  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 02:19:52.540634  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 02:19:52.543997  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 02:19:52.545361  137098 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 02:19:52.551963  137098 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 02:20:32.548625  137098 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 02:20:32.548758  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:20:32.549017  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:20:37.548857  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:20:37.549095  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:20:47.548213  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:20:47.548484  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:21:07.548048  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:21:07.548341  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:21:47.549585  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:21:47.549881  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:21:47.549906  137098 kubeadm.go:310] 
	I0804 02:21:47.549963  137098 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 02:21:47.550014  137098 kubeadm.go:310] 		timed out waiting for the condition
	I0804 02:21:47.550027  137098 kubeadm.go:310] 
	I0804 02:21:47.550078  137098 kubeadm.go:310] 	This error is likely caused by:
	I0804 02:21:47.550135  137098 kubeadm.go:310] 		- The kubelet is not running
	I0804 02:21:47.550300  137098 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 02:21:47.550312  137098 kubeadm.go:310] 
	I0804 02:21:47.550474  137098 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 02:21:47.550535  137098 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 02:21:47.550581  137098 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 02:21:47.550594  137098 kubeadm.go:310] 
	I0804 02:21:47.550735  137098 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 02:21:47.550850  137098 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 02:21:47.550862  137098 kubeadm.go:310] 
	I0804 02:21:47.551002  137098 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 02:21:47.551120  137098 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 02:21:47.551226  137098 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 02:21:47.551324  137098 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 02:21:47.551336  137098 kubeadm.go:310] 
	I0804 02:21:47.551707  137098 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 02:21:47.551846  137098 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 02:21:47.551941  137098 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0804 02:21:47.552082  137098 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-168045 localhost] and IPs [192.168.50.156 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-168045 localhost] and IPs [192.168.50.156 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0804 02:21:47.552142  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0804 02:21:49.514143  137098 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.961955448s)
	I0804 02:21:49.514237  137098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 02:21:49.530138  137098 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 02:21:49.544781  137098 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0804 02:21:49.544805  137098 kubeadm.go:157] found existing configuration files:
	
	I0804 02:21:49.544854  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 02:21:49.556559  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0804 02:21:49.556645  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0804 02:21:49.567385  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 02:21:49.577547  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0804 02:21:49.577626  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0804 02:21:49.588381  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 02:21:49.598077  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0804 02:21:49.598170  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 02:21:49.609051  137098 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 02:21:49.619283  137098 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0804 02:21:49.619357  137098 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 02:21:49.629608  137098 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0804 02:21:49.709940  137098 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0804 02:21:49.710057  137098 kubeadm.go:310] [preflight] Running pre-flight checks
	I0804 02:21:49.853979  137098 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0804 02:21:49.854148  137098 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0804 02:21:49.854304  137098 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0804 02:21:50.038970  137098 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0804 02:21:50.040871  137098 out.go:204]   - Generating certificates and keys ...
	I0804 02:21:50.040955  137098 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0804 02:21:50.041034  137098 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0804 02:21:50.041199  137098 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0804 02:21:50.041302  137098 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0804 02:21:50.041430  137098 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0804 02:21:50.044646  137098 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0804 02:21:50.046119  137098 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0804 02:21:50.048326  137098 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0804 02:21:50.049451  137098 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0804 02:21:50.050301  137098 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0804 02:21:50.050625  137098 kubeadm.go:310] [certs] Using the existing "sa" key
	I0804 02:21:50.050719  137098 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0804 02:21:50.330760  137098 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0804 02:21:50.515454  137098 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0804 02:21:50.822283  137098 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0804 02:21:51.194527  137098 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0804 02:21:51.210816  137098 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0804 02:21:51.211904  137098 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0804 02:21:51.211987  137098 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0804 02:21:51.352320  137098 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0804 02:21:51.355227  137098 out.go:204]   - Booting up control plane ...
	I0804 02:21:51.355363  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0804 02:21:51.367833  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0804 02:21:51.369090  137098 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0804 02:21:51.371663  137098 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0804 02:21:51.372953  137098 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0804 02:22:31.375352  137098 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0804 02:22:31.375971  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:22:31.376241  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:22:36.376757  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:22:36.377065  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:22:46.377395  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:22:46.377693  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:23:06.377164  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:23:06.377452  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:23:46.377124  137098 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0804 02:23:46.377878  137098 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0804 02:23:46.377909  137098 kubeadm.go:310] 
	I0804 02:23:46.377962  137098 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0804 02:23:46.378019  137098 kubeadm.go:310] 		timed out waiting for the condition
	I0804 02:23:46.378029  137098 kubeadm.go:310] 
	I0804 02:23:46.378065  137098 kubeadm.go:310] 	This error is likely caused by:
	I0804 02:23:46.378133  137098 kubeadm.go:310] 		- The kubelet is not running
	I0804 02:23:46.378290  137098 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0804 02:23:46.378317  137098 kubeadm.go:310] 
	I0804 02:23:46.378462  137098 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0804 02:23:46.378507  137098 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0804 02:23:46.378579  137098 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0804 02:23:46.378595  137098 kubeadm.go:310] 
	I0804 02:23:46.378741  137098 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0804 02:23:46.378905  137098 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0804 02:23:46.378919  137098 kubeadm.go:310] 
	I0804 02:23:46.379078  137098 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0804 02:23:46.379201  137098 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0804 02:23:46.379308  137098 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0804 02:23:46.379405  137098 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0804 02:23:46.379421  137098 kubeadm.go:310] 
	I0804 02:23:46.380085  137098 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0804 02:23:46.380203  137098 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0804 02:23:46.380288  137098 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0804 02:23:46.380370  137098 kubeadm.go:394] duration metric: took 3m57.022534041s to StartCluster
	I0804 02:23:46.380425  137098 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 02:23:46.380500  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 02:23:46.428944  137098 cri.go:89] found id: ""
	I0804 02:23:46.428973  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.428981  137098 logs.go:278] No container was found matching "kube-apiserver"
	I0804 02:23:46.428988  137098 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 02:23:46.429068  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 02:23:46.481514  137098 cri.go:89] found id: ""
	I0804 02:23:46.481543  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.481555  137098 logs.go:278] No container was found matching "etcd"
	I0804 02:23:46.481562  137098 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 02:23:46.481622  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 02:23:46.522167  137098 cri.go:89] found id: ""
	I0804 02:23:46.522196  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.522204  137098 logs.go:278] No container was found matching "coredns"
	I0804 02:23:46.522210  137098 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 02:23:46.522262  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 02:23:46.569486  137098 cri.go:89] found id: ""
	I0804 02:23:46.569519  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.569530  137098 logs.go:278] No container was found matching "kube-scheduler"
	I0804 02:23:46.569539  137098 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 02:23:46.569605  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 02:23:46.622212  137098 cri.go:89] found id: ""
	I0804 02:23:46.622244  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.622256  137098 logs.go:278] No container was found matching "kube-proxy"
	I0804 02:23:46.622264  137098 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 02:23:46.622326  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 02:23:46.655752  137098 cri.go:89] found id: ""
	I0804 02:23:46.655784  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.655795  137098 logs.go:278] No container was found matching "kube-controller-manager"
	I0804 02:23:46.655801  137098 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 02:23:46.655879  137098 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 02:23:46.692730  137098 cri.go:89] found id: ""
	I0804 02:23:46.692763  137098 logs.go:276] 0 containers: []
	W0804 02:23:46.692772  137098 logs.go:278] No container was found matching "kindnet"
	I0804 02:23:46.692784  137098 logs.go:123] Gathering logs for dmesg ...
	I0804 02:23:46.692799  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 02:23:46.709835  137098 logs.go:123] Gathering logs for describe nodes ...
	I0804 02:23:46.709876  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 02:23:46.867577  137098 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 02:23:46.867610  137098 logs.go:123] Gathering logs for CRI-O ...
	I0804 02:23:46.867629  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 02:23:46.973008  137098 logs.go:123] Gathering logs for container status ...
	I0804 02:23:46.973053  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 02:23:47.032349  137098 logs.go:123] Gathering logs for kubelet ...
	I0804 02:23:47.032392  137098 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0804 02:23:47.098477  137098 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0804 02:23:47.098541  137098 out.go:239] * 
	W0804 02:23:47.098632  137098 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 02:23:47.098665  137098 out.go:239] * 
	W0804 02:23:47.099886  137098 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0804 02:23:47.103930  137098 out.go:177] 
	W0804 02:23:47.105737  137098 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0804 02:23:47.105813  137098 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0804 02:23:47.105840  137098 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0804 02:23:47.107947  137098 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
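For manual triage of a K8S_KUBELET_NOT_RUNNING failure like the one above, the relevant commands come straight from the error output itself (systemctl/journalctl for the kubelet, crictl for crashed control-plane containers, and the suggested cgroup-driver flag). A minimal sketch using this run's profile name, not something the harness actually executed:

	out/minikube-linux-amd64 -p kubernetes-upgrade-168045 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-168045 ssh -- sudo journalctl -xeu kubelet
	out/minikube-linux-amd64 -p kubernetes-upgrade-168045 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

Whether the kubelet.cgroup-driver=systemd hint actually fixes the v1.20.0 bring-up on this ISO is not verified by this report; it is only the suggestion the failure message prints.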
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-168045
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-168045: (2.699217722s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-168045 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-168045 status --format={{.Host}}: exit status 7 (86.843147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
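The exit status 7 above is expected for a profile that was just stopped: minikube status encodes cluster state in its exit code as well as in its output, so a stopped host reports non-zero even though nothing is broken, which is why the harness notes "may be ok". A quick way to see both the state and the code together (a sketch, same profile name):

	out/minikube-linux-amd64 -p kubernetes-upgrade-168045 status --format={{.Host}}; echo "exit=$?"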
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.357561551s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-168045 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.719164ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-168045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-168045
	    minikube start -p kubernetes-upgrade-168045 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1680452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-168045 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-168045 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.835749768s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-04 02:25:41.298043068 +0000 UTC m=+6220.115551781
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-168045 -n kubernetes-upgrade-168045
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-168045 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-168045 logs -n 25: (1.675453027s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-821361 sudo              | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | containerd config dump             |                           |         |         |                     |                     |
	| ssh     | -p cilium-821361 sudo              | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | systemctl status crio --all        |                           |         |         |                     |                     |
	|         | --full --no-pager                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-821361 sudo              | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-821361 sudo find         | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-821361 sudo crio         | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-821361                   | cilium-821361             | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	| start   | -p force-systemd-flag-156304       | force-systemd-flag-156304 | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-000030             | NoKubernetes-000030       | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	| start   | -p NoKubernetes-000030             | NoKubernetes-000030       | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-156304 ssh cat  | force-systemd-flag-156304 | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-156304       | force-systemd-flag-156304 | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	| start   | -p force-systemd-env-974508        | force-systemd-env-974508  | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:23 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-000030 sudo        | NoKubernetes-000030       | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-000030             | NoKubernetes-000030       | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:22 UTC |
	| start   | -p cert-expiration-362636          | cert-expiration-362636    | jenkins | v1.33.1 | 04 Aug 24 02:22 UTC | 04 Aug 24 02:23 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-974508        | force-systemd-env-974508  | jenkins | v1.33.1 | 04 Aug 24 02:23 UTC | 04 Aug 24 02:23 UTC |
	| start   | -p stopped-upgrade-866998          | minikube                  | jenkins | v1.26.0 | 04 Aug 24 02:23 UTC | 04 Aug 24 02:24 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-168045       | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:23 UTC | 04 Aug 24 02:23 UTC |
	| start   | -p kubernetes-upgrade-168045       | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:23 UTC | 04 Aug 24 02:24 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-866998 stop        | minikube                  | jenkins | v1.26.0 | 04 Aug 24 02:24 UTC | 04 Aug 24 02:24 UTC |
	| start   | -p stopped-upgrade-866998          | stopped-upgrade-866998    | jenkins | v1.33.1 | 04 Aug 24 02:24 UTC | 04 Aug 24 02:25 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-168045       | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:24 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-168045       | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:24 UTC | 04 Aug 24 02:25 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0  |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-866998          | stopped-upgrade-866998    | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC | 04 Aug 24 02:25 UTC |
	| start   | -p cert-options-933588             | cert-options-933588       | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 02:25:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 02:25:19.061483  144651 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:25:19.061602  144651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:25:19.061606  144651 out.go:304] Setting ErrFile to fd 2...
	I0804 02:25:19.061609  144651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:25:19.062276  144651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:25:19.063259  144651 out.go:298] Setting JSON to false
	I0804 02:25:19.064315  144651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14863,"bootTime":1722723456,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:25:19.064383  144651 start.go:139] virtualization: kvm guest
	I0804 02:25:19.066597  144651 out.go:177] * [cert-options-933588] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:25:19.068017  144651 notify.go:220] Checking for updates...
	I0804 02:25:19.068061  144651 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:25:19.069375  144651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:25:19.070796  144651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:25:19.072103  144651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:25:19.073587  144651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:25:19.074917  144651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:25:19.076560  144651 config.go:182] Loaded profile config "cert-expiration-362636": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:25:19.076646  144651 config.go:182] Loaded profile config "kubernetes-upgrade-168045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 02:25:19.076752  144651 config.go:182] Loaded profile config "pause-141370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:25:19.076831  144651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:25:19.115483  144651 out.go:177] * Using the kvm2 driver based on user configuration
	I0804 02:25:19.116771  144651 start.go:297] selected driver: kvm2
	I0804 02:25:19.116776  144651 start.go:901] validating driver "kvm2" against <nil>
	I0804 02:25:19.116791  144651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:25:19.117512  144651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:25:19.117591  144651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:25:19.133527  144651 install.go:137] /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:25:19.133569  144651 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 02:25:19.133780  144651 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 02:25:19.133856  144651 cni.go:84] Creating CNI manager for ""
	I0804 02:25:19.133864  144651 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:25:19.133870  144651 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 02:25:19.133924  144651 start.go:340] cluster config:
	{Name:cert-options-933588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-933588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:25:19.134024  144651 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:25:19.135950  144651 out.go:177] * Starting "cert-options-933588" primary control-plane node in "cert-options-933588" cluster
	I0804 02:25:19.137260  144651 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:25:19.137294  144651 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 02:25:19.137300  144651 cache.go:56] Caching tarball of preloaded images
	I0804 02:25:19.137395  144651 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:25:19.137405  144651 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 02:25:19.137527  144651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/cert-options-933588/config.json ...
	I0804 02:25:19.137542  144651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/cert-options-933588/config.json: {Name:mkef461c3c8113962d381efcf9f4e69c59067e33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:25:19.137677  144651 start.go:360] acquireMachinesLock for cert-options-933588: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:25:19.137705  144651 start.go:364] duration metric: took 18.974µs to acquireMachinesLock for "cert-options-933588"
	I0804 02:25:19.137726  144651 start.go:93] Provisioning new machine with config: &{Name:cert-options-933588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-933588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 02:25:19.137774  144651 start.go:125] createHost starting for "" (driver="kvm2")
	I0804 02:25:19.139389  144651 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0804 02:25:19.139536  144651 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:19.139578  144651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:19.155143  144651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40883
	I0804 02:25:19.155602  144651 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:19.156185  144651 main.go:141] libmachine: Using API Version  1
	I0804 02:25:19.156203  144651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:19.156512  144651 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:19.156684  144651 main.go:141] libmachine: (cert-options-933588) Calling .GetMachineName
	I0804 02:25:19.156838  144651 main.go:141] libmachine: (cert-options-933588) Calling .DriverName
	I0804 02:25:19.156980  144651 start.go:159] libmachine.API.Create for "cert-options-933588" (driver="kvm2")
	I0804 02:25:19.157006  144651 client.go:168] LocalClient.Create starting
	I0804 02:25:19.157037  144651 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem
	I0804 02:25:19.157065  144651 main.go:141] libmachine: Decoding PEM data...
	I0804 02:25:19.157084  144651 main.go:141] libmachine: Parsing certificate...
	I0804 02:25:19.157139  144651 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem
	I0804 02:25:19.157154  144651 main.go:141] libmachine: Decoding PEM data...
	I0804 02:25:19.157165  144651 main.go:141] libmachine: Parsing certificate...
	I0804 02:25:19.157177  144651 main.go:141] libmachine: Running pre-create checks...
	I0804 02:25:19.157182  144651 main.go:141] libmachine: (cert-options-933588) Calling .PreCreateCheck
	I0804 02:25:19.157571  144651 main.go:141] libmachine: (cert-options-933588) Calling .GetConfigRaw
	I0804 02:25:19.157993  144651 main.go:141] libmachine: Creating machine...
	I0804 02:25:19.158014  144651 main.go:141] libmachine: (cert-options-933588) Calling .Create
	I0804 02:25:19.158177  144651 main.go:141] libmachine: (cert-options-933588) Creating KVM machine...
	I0804 02:25:19.159895  144651 main.go:141] libmachine: (cert-options-933588) DBG | found existing default KVM network
	I0804 02:25:19.161187  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.160982  144674 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:3e:bc:d2} reservation:<nil>}
	I0804 02:25:19.161943  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.161840  144674 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:22:71:8f} reservation:<nil>}
	I0804 02:25:19.162676  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.162600  144674 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:c9:34} reservation:<nil>}
	I0804 02:25:19.163748  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.163683  144674 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a57d0}
	I0804 02:25:19.163848  144651 main.go:141] libmachine: (cert-options-933588) DBG | created network xml: 
	I0804 02:25:19.163857  144651 main.go:141] libmachine: (cert-options-933588) DBG | <network>
	I0804 02:25:19.163863  144651 main.go:141] libmachine: (cert-options-933588) DBG |   <name>mk-cert-options-933588</name>
	I0804 02:25:19.163867  144651 main.go:141] libmachine: (cert-options-933588) DBG |   <dns enable='no'/>
	I0804 02:25:19.163872  144651 main.go:141] libmachine: (cert-options-933588) DBG |   
	I0804 02:25:19.163877  144651 main.go:141] libmachine: (cert-options-933588) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0804 02:25:19.163882  144651 main.go:141] libmachine: (cert-options-933588) DBG |     <dhcp>
	I0804 02:25:19.163887  144651 main.go:141] libmachine: (cert-options-933588) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0804 02:25:19.163891  144651 main.go:141] libmachine: (cert-options-933588) DBG |     </dhcp>
	I0804 02:25:19.163895  144651 main.go:141] libmachine: (cert-options-933588) DBG |   </ip>
	I0804 02:25:19.163899  144651 main.go:141] libmachine: (cert-options-933588) DBG |   
	I0804 02:25:19.163902  144651 main.go:141] libmachine: (cert-options-933588) DBG | </network>
	I0804 02:25:19.163908  144651 main.go:141] libmachine: (cert-options-933588) DBG | 
	I0804 02:25:19.169923  144651 main.go:141] libmachine: (cert-options-933588) DBG | trying to create private KVM network mk-cert-options-933588 192.168.72.0/24...
	I0804 02:25:19.242084  144651 main.go:141] libmachine: (cert-options-933588) DBG | private KVM network mk-cert-options-933588 192.168.72.0/24 created
	I0804 02:25:19.242110  144651 main.go:141] libmachine: (cert-options-933588) Setting up store path in /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588 ...
	I0804 02:25:19.242121  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.242066  144674 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:25:19.242139  144651 main.go:141] libmachine: (cert-options-933588) Building disk image from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 02:25:19.242239  144651 main.go:141] libmachine: (cert-options-933588) Downloading /home/jenkins/minikube-integration/19364-90243/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0804 02:25:19.519897  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.519772  144674 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588/id_rsa...
	I0804 02:25:19.782911  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.782768  144674 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588/cert-options-933588.rawdisk...
	I0804 02:25:19.782923  144651 main.go:141] libmachine: (cert-options-933588) DBG | Writing magic tar header
	I0804 02:25:19.782935  144651 main.go:141] libmachine: (cert-options-933588) DBG | Writing SSH key tar header
	I0804 02:25:19.782941  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:19.782902  144674 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588 ...
	I0804 02:25:19.783233  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588
	I0804 02:25:19.783307  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588 (perms=drwx------)
	I0804 02:25:19.783332  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube/machines
	I0804 02:25:19.783342  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube/machines (perms=drwxr-xr-x)
	I0804 02:25:19.783358  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243/.minikube (perms=drwxr-xr-x)
	I0804 02:25:19.783438  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:25:19.783454  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins/minikube-integration/19364-90243 (perms=drwxrwxr-x)
	I0804 02:25:19.783463  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19364-90243
	I0804 02:25:19.783477  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0804 02:25:19.783483  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home/jenkins
	I0804 02:25:19.783501  144651 main.go:141] libmachine: (cert-options-933588) DBG | Checking permissions on dir: /home
	I0804 02:25:19.783506  144651 main.go:141] libmachine: (cert-options-933588) DBG | Skipping /home - not owner
	I0804 02:25:19.783527  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0804 02:25:19.783541  144651 main.go:141] libmachine: (cert-options-933588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0804 02:25:19.783551  144651 main.go:141] libmachine: (cert-options-933588) Creating domain...
	I0804 02:25:19.785216  144651 main.go:141] libmachine: (cert-options-933588) define libvirt domain using xml: 
	I0804 02:25:19.785223  144651 main.go:141] libmachine: (cert-options-933588) <domain type='kvm'>
	I0804 02:25:19.785229  144651 main.go:141] libmachine: (cert-options-933588)   <name>cert-options-933588</name>
	I0804 02:25:19.785233  144651 main.go:141] libmachine: (cert-options-933588)   <memory unit='MiB'>2048</memory>
	I0804 02:25:19.785237  144651 main.go:141] libmachine: (cert-options-933588)   <vcpu>2</vcpu>
	I0804 02:25:19.785241  144651 main.go:141] libmachine: (cert-options-933588)   <features>
	I0804 02:25:19.785246  144651 main.go:141] libmachine: (cert-options-933588)     <acpi/>
	I0804 02:25:19.785250  144651 main.go:141] libmachine: (cert-options-933588)     <apic/>
	I0804 02:25:19.785254  144651 main.go:141] libmachine: (cert-options-933588)     <pae/>
	I0804 02:25:19.785257  144651 main.go:141] libmachine: (cert-options-933588)     
	I0804 02:25:19.785261  144651 main.go:141] libmachine: (cert-options-933588)   </features>
	I0804 02:25:19.785280  144651 main.go:141] libmachine: (cert-options-933588)   <cpu mode='host-passthrough'>
	I0804 02:25:19.785284  144651 main.go:141] libmachine: (cert-options-933588)   
	I0804 02:25:19.785287  144651 main.go:141] libmachine: (cert-options-933588)   </cpu>
	I0804 02:25:19.785291  144651 main.go:141] libmachine: (cert-options-933588)   <os>
	I0804 02:25:19.785294  144651 main.go:141] libmachine: (cert-options-933588)     <type>hvm</type>
	I0804 02:25:19.785319  144651 main.go:141] libmachine: (cert-options-933588)     <boot dev='cdrom'/>
	I0804 02:25:19.785327  144651 main.go:141] libmachine: (cert-options-933588)     <boot dev='hd'/>
	I0804 02:25:19.785333  144651 main.go:141] libmachine: (cert-options-933588)     <bootmenu enable='no'/>
	I0804 02:25:19.785369  144651 main.go:141] libmachine: (cert-options-933588)   </os>
	I0804 02:25:19.785378  144651 main.go:141] libmachine: (cert-options-933588)   <devices>
	I0804 02:25:19.785389  144651 main.go:141] libmachine: (cert-options-933588)     <disk type='file' device='cdrom'>
	I0804 02:25:19.785404  144651 main.go:141] libmachine: (cert-options-933588)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588/boot2docker.iso'/>
	I0804 02:25:19.785408  144651 main.go:141] libmachine: (cert-options-933588)       <target dev='hdc' bus='scsi'/>
	I0804 02:25:19.785413  144651 main.go:141] libmachine: (cert-options-933588)       <readonly/>
	I0804 02:25:19.785416  144651 main.go:141] libmachine: (cert-options-933588)     </disk>
	I0804 02:25:19.785420  144651 main.go:141] libmachine: (cert-options-933588)     <disk type='file' device='disk'>
	I0804 02:25:19.785425  144651 main.go:141] libmachine: (cert-options-933588)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0804 02:25:19.785432  144651 main.go:141] libmachine: (cert-options-933588)       <source file='/home/jenkins/minikube-integration/19364-90243/.minikube/machines/cert-options-933588/cert-options-933588.rawdisk'/>
	I0804 02:25:19.785435  144651 main.go:141] libmachine: (cert-options-933588)       <target dev='hda' bus='virtio'/>
	I0804 02:25:19.785439  144651 main.go:141] libmachine: (cert-options-933588)     </disk>
	I0804 02:25:19.785443  144651 main.go:141] libmachine: (cert-options-933588)     <interface type='network'>
	I0804 02:25:19.785448  144651 main.go:141] libmachine: (cert-options-933588)       <source network='mk-cert-options-933588'/>
	I0804 02:25:19.785454  144651 main.go:141] libmachine: (cert-options-933588)       <model type='virtio'/>
	I0804 02:25:19.785458  144651 main.go:141] libmachine: (cert-options-933588)     </interface>
	I0804 02:25:19.785465  144651 main.go:141] libmachine: (cert-options-933588)     <interface type='network'>
	I0804 02:25:19.785473  144651 main.go:141] libmachine: (cert-options-933588)       <source network='default'/>
	I0804 02:25:19.785477  144651 main.go:141] libmachine: (cert-options-933588)       <model type='virtio'/>
	I0804 02:25:19.785480  144651 main.go:141] libmachine: (cert-options-933588)     </interface>
	I0804 02:25:19.785484  144651 main.go:141] libmachine: (cert-options-933588)     <serial type='pty'>
	I0804 02:25:19.785488  144651 main.go:141] libmachine: (cert-options-933588)       <target port='0'/>
	I0804 02:25:19.785490  144651 main.go:141] libmachine: (cert-options-933588)     </serial>
	I0804 02:25:19.785495  144651 main.go:141] libmachine: (cert-options-933588)     <console type='pty'>
	I0804 02:25:19.785498  144651 main.go:141] libmachine: (cert-options-933588)       <target type='serial' port='0'/>
	I0804 02:25:19.785502  144651 main.go:141] libmachine: (cert-options-933588)     </console>
	I0804 02:25:19.785505  144651 main.go:141] libmachine: (cert-options-933588)     <rng model='virtio'>
	I0804 02:25:19.785510  144651 main.go:141] libmachine: (cert-options-933588)       <backend model='random'>/dev/random</backend>
	I0804 02:25:19.785513  144651 main.go:141] libmachine: (cert-options-933588)     </rng>
	I0804 02:25:19.785516  144651 main.go:141] libmachine: (cert-options-933588)     
	I0804 02:25:19.785519  144651 main.go:141] libmachine: (cert-options-933588)     
	I0804 02:25:19.785523  144651 main.go:141] libmachine: (cert-options-933588)   </devices>
	I0804 02:25:19.785526  144651 main.go:141] libmachine: (cert-options-933588) </domain>
	I0804 02:25:19.785535  144651 main.go:141] libmachine: (cert-options-933588) 
	I0804 02:25:19.790051  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:76:f3:a3 in network default
	I0804 02:25:19.790539  144651 main.go:141] libmachine: (cert-options-933588) Ensuring networks are active...
	I0804 02:25:19.790551  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:19.791271  144651 main.go:141] libmachine: (cert-options-933588) Ensuring network default is active
	I0804 02:25:19.791534  144651 main.go:141] libmachine: (cert-options-933588) Ensuring network mk-cert-options-933588 is active
	I0804 02:25:19.792018  144651 main.go:141] libmachine: (cert-options-933588) Getting domain xml...
	I0804 02:25:19.792707  144651 main.go:141] libmachine: (cert-options-933588) Creating domain...
	I0804 02:25:21.047541  144651 main.go:141] libmachine: (cert-options-933588) Waiting to get IP...
	I0804 02:25:21.048358  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:21.048778  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:21.048844  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:21.048770  144674 retry.go:31] will retry after 196.016593ms: waiting for machine to come up
	I0804 02:25:21.246336  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:21.246952  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:21.246990  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:21.246914  144674 retry.go:31] will retry after 350.857529ms: waiting for machine to come up
	I0804 02:25:21.599452  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:21.599878  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:21.599927  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:21.599820  144674 retry.go:31] will retry after 297.600454ms: waiting for machine to come up
	I0804 02:25:21.899461  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:21.899939  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:21.899956  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:21.899910  144674 retry.go:31] will retry after 383.307459ms: waiting for machine to come up
	I0804 02:25:22.284474  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:22.284925  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:22.284938  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:22.284915  144674 retry.go:31] will retry after 705.912926ms: waiting for machine to come up
	I0804 02:25:22.993059  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:22.993546  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:22.993580  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:22.993507  144674 retry.go:31] will retry after 891.510522ms: waiting for machine to come up
	I0804 02:25:23.886718  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:23.887179  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:23.887192  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:23.887148  144674 retry.go:31] will retry after 916.870689ms: waiting for machine to come up
	I0804 02:25:23.075251  139087 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.076570907s)
	W0804 02:25:23.075318  139087 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0804 02:25:23.075331  139087 logs.go:123] Gathering logs for kube-apiserver [f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac] ...
	I0804 02:25:23.075351  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac"
	I0804 02:25:23.114412  139087 logs.go:123] Gathering logs for etcd [9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8] ...
	I0804 02:25:23.114449  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:25:25.668616  139087 api_server.go:253] Checking apiserver healthz at https://192.168.61.197:8443/healthz ...
	I0804 02:25:24.805931  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:24.806399  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:24.806417  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:24.806352  144674 retry.go:31] will retry after 1.38877365s: waiting for machine to come up
	I0804 02:25:26.196714  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:26.197172  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:26.197192  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:26.197114  144674 retry.go:31] will retry after 1.847434127s: waiting for machine to come up
	I0804 02:25:28.046808  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:28.047438  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:28.047461  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:28.047377  144674 retry.go:31] will retry after 2.30107252s: waiting for machine to come up
	I0804 02:25:30.669863  139087 api_server.go:269] stopped: https://192.168.61.197:8443/healthz: Get "https://192.168.61.197:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0804 02:25:30.669950  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 02:25:30.670013  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 02:25:30.707937  139087 cri.go:89] found id: "d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc"
	I0804 02:25:30.707969  139087 cri.go:89] found id: "f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac"
	I0804 02:25:30.707975  139087 cri.go:89] found id: ""
	I0804 02:25:30.707985  139087 logs.go:276] 2 containers: [d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac]
	I0804 02:25:30.708049  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.712419  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.716670  139087 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 02:25:30.716731  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 02:25:30.753843  139087 cri.go:89] found id: "9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:25:30.753876  139087 cri.go:89] found id: ""
	I0804 02:25:30.753891  139087 logs.go:276] 1 containers: [9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8]
	I0804 02:25:30.753967  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.758304  139087 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 02:25:30.758389  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 02:25:30.793660  139087 cri.go:89] found id: "18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438"
	I0804 02:25:30.793684  139087 cri.go:89] found id: "bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1"
	I0804 02:25:30.793688  139087 cri.go:89] found id: ""
	I0804 02:25:30.793696  139087 logs.go:276] 2 containers: [18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438 bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1]
	I0804 02:25:30.793757  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.798393  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.802610  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 02:25:30.802686  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 02:25:30.852493  139087 cri.go:89] found id: "f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c"
	I0804 02:25:30.852519  139087 cri.go:89] found id: "1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45"
	I0804 02:25:30.852523  139087 cri.go:89] found id: ""
	I0804 02:25:30.852530  139087 logs.go:276] 2 containers: [f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c 1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45]
	I0804 02:25:30.852593  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.857500  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.861920  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 02:25:30.861987  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 02:25:30.906253  139087 cri.go:89] found id: "db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5"
	I0804 02:25:30.906288  139087 cri.go:89] found id: ""
	I0804 02:25:30.906298  139087 logs.go:276] 1 containers: [db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5]
	I0804 02:25:30.906360  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.910766  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 02:25:30.910855  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 02:25:30.950383  139087 cri.go:89] found id: "ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47"
	I0804 02:25:30.950412  139087 cri.go:89] found id: ""
	I0804 02:25:30.950422  139087 logs.go:276] 1 containers: [ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47]
	I0804 02:25:30.950486  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:30.954944  139087 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 02:25:30.955027  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 02:25:30.991796  139087 cri.go:89] found id: ""
	I0804 02:25:30.991837  139087 logs.go:276] 0 containers: []
	W0804 02:25:30.991850  139087 logs.go:278] No container was found matching "kindnet"
	I0804 02:25:30.991869  139087 logs.go:123] Gathering logs for dmesg ...
	I0804 02:25:30.991889  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 02:25:31.012317  139087 logs.go:123] Gathering logs for kube-apiserver [f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac] ...
	I0804 02:25:31.012366  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f19867f7c71514444768449b03ae2c858292a22849ed7ce08dc65b4e69e038ac"
	I0804 02:25:31.058260  139087 logs.go:123] Gathering logs for kube-controller-manager [ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47] ...
	I0804 02:25:31.058294  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47"
	I0804 02:25:31.131502  139087 logs.go:123] Gathering logs for CRI-O ...
	I0804 02:25:31.131564  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 02:25:31.523958  139087 logs.go:123] Gathering logs for kubelet ...
	I0804 02:25:31.523998  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 02:25:31.642968  139087 logs.go:123] Gathering logs for etcd [9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8] ...
	I0804 02:25:31.643014  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:25:31.691228  139087 logs.go:123] Gathering logs for coredns [bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1] ...
	I0804 02:25:31.691264  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1"
	I0804 02:25:31.731585  139087 logs.go:123] Gathering logs for kube-scheduler [f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c] ...
	I0804 02:25:31.731631  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c"
	I0804 02:25:31.803748  139087 logs.go:123] Gathering logs for kube-scheduler [1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45] ...
	I0804 02:25:31.803794  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45"
	I0804 02:25:30.350230  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:30.350640  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:30.350659  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:30.350597  144674 retry.go:31] will retry after 2.141264028s: waiting for machine to come up
	I0804 02:25:32.494750  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:32.495333  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:32.495345  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:32.495260  144674 retry.go:31] will retry after 2.35627147s: waiting for machine to come up
	I0804 02:25:33.098539  144246 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a 100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe 5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76 fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564 c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233 515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66 1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04 ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b 97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4 d326d3a431a11323b5bed0cce1184bb1aa389493903e6c80ec5c7d6ea806e41e fba10a304541d576a5066061f5fbfa0f5e21ee75411137fe173388c65e1769f8 cec78f4dc50d70d1b63454ccbc5ad2f148c64ab77b2c3aade9eb8ab80aa1cda2 5a0a8ae146fc9cdd72cfcacfa4f69699c1c15fe021d2a242a7405e33e62aeb07 e18d6e04dddebaf9ae17056db86bad323fd66ddc320845c8269171f61b37139c bacb78e0ab4e6ea491bdf5f3cf21cf4ed5e50d536e123466077dcc86e68afd6b a59ed33ee567b5dbaabc6704f42de6d47e12c2f3701a823e64a4a4d52d3e340a: (25.013861228s)
	W0804 02:25:33.098636  144246 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a 100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe 5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76 fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564 c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233 515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66 1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04 ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b 97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4 d326d3a431a11323b5bed0cce1184bb1aa389493903e6c80ec5c7d6ea806e41e fba10a304541d576a5066061f5fbfa0f5e21ee75411137fe173388c65e1769f8 cec78f4dc50d70d1b63454ccbc5ad2f148c64ab77b2c3aade9eb8ab80aa1cda2 5a0a8ae146fc9cdd72cfcacfa4f69699c1c15fe021d2a242a7405e33e62aeb07 e18d6e04dddebaf9ae17056db86bad323fd66ddc320845c8269171f61b37139c bacb78e0ab4e6ea491bdf5f3cf21cf4ed5e50d536e123466077dcc86e68afd6b a59ed33ee567b5dbaabc6704f42de6d47e12c2f3701a823e64a4a4d52d3e340a: Process exited with status 1
	stdout:
	20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a
	100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe
	5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76
	fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564
	c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233
	515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66
	1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04
	ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b
	
	stderr:
	E0804 02:25:33.052734    3280 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4\": container with ID starting with 97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4 not found: ID does not exist" containerID="97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4"
	time="2024-08-04T02:25:33Z" level=fatal msg="stopping the container \"97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4\": rpc error: code = NotFound desc = could not find container \"97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4\": container with ID starting with 97e291ac32ba4306da5f712eb33a42f837ddcd0e239ff7cf17239394af2f8bc4 not found: ID does not exist"
	I0804 02:25:33.098738  144246 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0804 02:25:33.145273  144246 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0804 02:25:33.156394  144246 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Aug  4 02:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Aug  4 02:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Aug  4 02:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug  4 02:24 /etc/kubernetes/scheduler.conf
	
	I0804 02:25:33.156482  144246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0804 02:25:33.166684  144246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0804 02:25:33.175804  144246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0804 02:25:33.184962  144246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 02:25:33.185042  144246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0804 02:25:33.194931  144246 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0804 02:25:33.204308  144246 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0804 02:25:33.204380  144246 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0804 02:25:33.214414  144246 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0804 02:25:33.224791  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:33.285565  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:34.181934  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:34.439208  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:34.523651  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:34.666165  144246 api_server.go:52] waiting for apiserver process to appear ...
	I0804 02:25:34.666276  144246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 02:25:35.166372  144246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 02:25:35.667181  144246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 02:25:35.687438  144246 api_server.go:72] duration metric: took 1.021270149s to wait for apiserver process to appear ...
	I0804 02:25:35.687470  144246 api_server.go:88] waiting for apiserver healthz status ...
	I0804 02:25:35.687495  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:31.844980  139087 logs.go:123] Gathering logs for coredns [18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438] ...
	I0804 02:25:31.845014  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438"
	I0804 02:25:31.888545  139087 logs.go:123] Gathering logs for describe nodes ...
	I0804 02:25:31.888581  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0804 02:25:33.185028  139087 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1.296427262s)
	W0804 02:25:33.185070  139087 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:46714->127.0.0.1:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:46714->127.0.0.1:8443: read: connection reset by peer
	
	** /stderr **
	I0804 02:25:33.185085  139087 logs.go:123] Gathering logs for kube-apiserver [d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc] ...
	I0804 02:25:33.185099  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc"
	I0804 02:25:33.235531  139087 logs.go:123] Gathering logs for kube-proxy [db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5] ...
	I0804 02:25:33.235579  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5"
	I0804 02:25:33.275984  139087 logs.go:123] Gathering logs for container status ...
	I0804 02:25:33.276026  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0804 02:25:35.824151  139087 api_server.go:253] Checking apiserver healthz at https://192.168.61.197:8443/healthz ...
	I0804 02:25:35.824929  139087 api_server.go:269] stopped: https://192.168.61.197:8443/healthz: Get "https://192.168.61.197:8443/healthz": dial tcp 192.168.61.197:8443: connect: connection refused
	I0804 02:25:35.824985  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0804 02:25:35.825039  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0804 02:25:35.868057  139087 cri.go:89] found id: "d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc"
	I0804 02:25:35.868094  139087 cri.go:89] found id: ""
	I0804 02:25:35.868107  139087 logs.go:276] 1 containers: [d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc]
	I0804 02:25:35.868180  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:35.872699  139087 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0804 02:25:35.872766  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0804 02:25:35.920578  139087 cri.go:89] found id: "9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:25:35.920600  139087 cri.go:89] found id: ""
	I0804 02:25:35.920609  139087 logs.go:276] 1 containers: [9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8]
	I0804 02:25:35.920663  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:35.925960  139087 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0804 02:25:35.926035  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0804 02:25:35.966857  139087 cri.go:89] found id: "18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438"
	I0804 02:25:35.966880  139087 cri.go:89] found id: "bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1"
	I0804 02:25:35.966884  139087 cri.go:89] found id: ""
	I0804 02:25:35.966893  139087 logs.go:276] 2 containers: [18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438 bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1]
	I0804 02:25:35.966953  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:35.971736  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:35.977003  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0804 02:25:35.977082  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0804 02:25:36.023657  139087 cri.go:89] found id: "f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c"
	I0804 02:25:36.023686  139087 cri.go:89] found id: "1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45"
	I0804 02:25:36.023694  139087 cri.go:89] found id: ""
	I0804 02:25:36.023704  139087 logs.go:276] 2 containers: [f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c 1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45]
	I0804 02:25:36.023770  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:36.028604  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:36.033773  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0804 02:25:36.033840  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0804 02:25:36.084188  139087 cri.go:89] found id: "db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5"
	I0804 02:25:36.084212  139087 cri.go:89] found id: ""
	I0804 02:25:36.084220  139087 logs.go:276] 1 containers: [db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5]
	I0804 02:25:36.084270  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:36.088746  139087 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0804 02:25:36.088821  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0804 02:25:36.126749  139087 cri.go:89] found id: "ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47"
	I0804 02:25:36.126777  139087 cri.go:89] found id: ""
	I0804 02:25:36.126797  139087 logs.go:276] 1 containers: [ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47]
	I0804 02:25:36.126864  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:25:36.132458  139087 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0804 02:25:36.132538  139087 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0804 02:25:36.174849  139087 cri.go:89] found id: ""
	I0804 02:25:36.174878  139087 logs.go:276] 0 containers: []
	W0804 02:25:36.174889  139087 logs.go:278] No container was found matching "kindnet"
	I0804 02:25:36.174901  139087 logs.go:123] Gathering logs for dmesg ...
	I0804 02:25:36.174917  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0804 02:25:36.190247  139087 logs.go:123] Gathering logs for describe nodes ...
	I0804 02:25:36.190290  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0804 02:25:36.273733  139087 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0804 02:25:36.273754  139087 logs.go:123] Gathering logs for kube-apiserver [d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc] ...
	I0804 02:25:36.273769  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d91c888a8f20084f26936ceac1b834900400aee97cd480fc9c6207f2116c1adc"
	I0804 02:25:36.316724  139087 logs.go:123] Gathering logs for etcd [9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8] ...
	I0804 02:25:36.316769  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:25:36.371802  139087 logs.go:123] Gathering logs for coredns [18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438] ...
	I0804 02:25:36.371842  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18ef227d935fcdbf1b538a90e0b1d5dc3f3b9914b0aa637835daf3b7cb9ad438"
	I0804 02:25:36.418627  139087 logs.go:123] Gathering logs for coredns [bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1] ...
	I0804 02:25:36.418665  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1"
	I0804 02:25:36.456538  139087 logs.go:123] Gathering logs for kube-scheduler [1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45] ...
	I0804 02:25:36.456572  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45"
	I0804 02:25:36.496006  139087 logs.go:123] Gathering logs for kubelet ...
	I0804 02:25:36.496040  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0804 02:25:36.607445  139087 logs.go:123] Gathering logs for kube-scheduler [f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c] ...
	I0804 02:25:36.607497  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f80a0c7fd18ce6c3d13ff4ecebced4566ed5736a42b88a752f0356303e3ab05c"
	I0804 02:25:36.691148  139087 logs.go:123] Gathering logs for kube-proxy [db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5] ...
	I0804 02:25:36.691192  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5"
	I0804 02:25:36.729464  139087 logs.go:123] Gathering logs for kube-controller-manager [ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47] ...
	I0804 02:25:36.729501  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47"
	I0804 02:25:36.783329  139087 logs.go:123] Gathering logs for CRI-O ...
	I0804 02:25:36.783370  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0804 02:25:34.852747  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:34.853193  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:34.853209  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:34.853141  144674 retry.go:31] will retry after 2.952547527s: waiting for machine to come up
	I0804 02:25:37.807536  144651 main.go:141] libmachine: (cert-options-933588) DBG | domain cert-options-933588 has defined MAC address 52:54:00:85:d7:20 in network mk-cert-options-933588
	I0804 02:25:37.808204  144651 main.go:141] libmachine: (cert-options-933588) DBG | unable to find current IP address of domain cert-options-933588 in network mk-cert-options-933588
	I0804 02:25:37.808229  144651 main.go:141] libmachine: (cert-options-933588) DBG | I0804 02:25:37.808142  144674 retry.go:31] will retry after 4.655610835s: waiting for machine to come up
	I0804 02:25:38.055122  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0804 02:25:38.055159  144246 api_server.go:103] status: https://192.168.50.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0804 02:25:38.055172  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:38.131925  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0804 02:25:38.131956  144246 api_server.go:103] status: https://192.168.50.156:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0804 02:25:38.188293  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:38.193669  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 02:25:38.193704  144246 api_server.go:103] status: https://192.168.50.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 02:25:38.688239  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:38.692718  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 02:25:38.692749  144246 api_server.go:103] status: https://192.168.50.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 02:25:39.188386  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:39.199980  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0804 02:25:39.200029  144246 api_server.go:103] status: https://192.168.50.156:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0804 02:25:39.688631  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:39.693093  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 200:
	ok
	I0804 02:25:39.699784  144246 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 02:25:39.699808  144246 api_server.go:131] duration metric: took 4.01232965s to wait for apiserver health ...
	I0804 02:25:39.699820  144246 cni.go:84] Creating CNI manager for ""
	I0804 02:25:39.699828  144246 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:25:39.701802  144246 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0804 02:25:39.703070  144246 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0804 02:25:39.715047  144246 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0804 02:25:39.734629  144246 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 02:25:39.745777  144246 system_pods.go:59] 8 kube-system pods found
	I0804 02:25:39.745815  144246 system_pods.go:61] "coredns-6f6b679f8f-fsdlt" [7350e353-eaf2-4d34-b641-8184b89a091c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 02:25:39.745826  144246 system_pods.go:61] "coredns-6f6b679f8f-nnn4n" [d8ebb426-4d43-47a3-a227-afd2c2c9a3e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 02:25:39.745840  144246 system_pods.go:61] "etcd-kubernetes-upgrade-168045" [40ae89f1-c798-4576-8ca8-64ea559fe7c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 02:25:39.745852  144246 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-168045" [428721f3-9f41-427f-8e54-5bdf94719122] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 02:25:39.745867  144246 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-168045" [8dcd736c-7d46-4e7f-9b37-3121e789aa5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 02:25:39.745885  144246 system_pods.go:61] "kube-proxy-ngkjk" [3e6a0c97-1bef-47a2-9923-458a31d52839] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0804 02:25:39.745896  144246 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-168045" [2880c209-09db-4dfe-aca3-0e165d9d7bc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 02:25:39.745907  144246 system_pods.go:61] "storage-provisioner" [3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0804 02:25:39.745920  144246 system_pods.go:74] duration metric: took 11.268043ms to wait for pod list to return data ...
	I0804 02:25:39.745933  144246 node_conditions.go:102] verifying NodePressure condition ...
	I0804 02:25:39.749983  144246 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 02:25:39.750012  144246 node_conditions.go:123] node cpu capacity is 2
	I0804 02:25:39.750025  144246 node_conditions.go:105] duration metric: took 4.082731ms to run NodePressure ...
	I0804 02:25:39.750044  144246 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0804 02:25:40.120650  144246 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0804 02:25:40.133418  144246 ops.go:34] apiserver oom_adj: -16
	I0804 02:25:40.133451  144246 kubeadm.go:597] duration metric: took 32.15436857s to restartPrimaryControlPlane
	I0804 02:25:40.133465  144246 kubeadm.go:394] duration metric: took 32.339987695s to StartCluster
	I0804 02:25:40.133497  144246 settings.go:142] acquiring lock: {Name:mkf532aceb8d8524495256eb01b2b67c117281c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:25:40.133602  144246 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:25:40.135158  144246 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/kubeconfig: {Name:mk9db0d5521301bbe44f571d0153ba4b675d0242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:25:40.135480  144246 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.156 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0804 02:25:40.135549  144246 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0804 02:25:40.135635  144246 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-168045"
	I0804 02:25:40.135657  144246 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-168045"
	I0804 02:25:40.135667  144246 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-168045"
	W0804 02:25:40.135677  144246 addons.go:243] addon storage-provisioner should already be in state true
	I0804 02:25:40.135701  144246 config.go:182] Loaded profile config "kubernetes-upgrade-168045": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0804 02:25:40.135706  144246 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-168045"
	I0804 02:25:40.135714  144246 host.go:66] Checking if "kubernetes-upgrade-168045" exists ...
	I0804 02:25:40.136118  144246 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:40.136152  144246 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:40.136158  144246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:40.136188  144246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:40.137312  144246 out.go:177] * Verifying Kubernetes components...
	I0804 02:25:40.138760  144246 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:25:40.157486  144246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I0804 02:25:40.157978  144246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36381
	I0804 02:25:40.158255  144246 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:40.158452  144246 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:40.158926  144246 main.go:141] libmachine: Using API Version  1
	I0804 02:25:40.158948  144246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:40.159126  144246 main.go:141] libmachine: Using API Version  1
	I0804 02:25:40.159151  144246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:40.159377  144246 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:40.159458  144246 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:40.159637  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetState
	I0804 02:25:40.159957  144246 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:40.160005  144246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:40.163047  144246 kapi.go:59] client config for kubernetes-upgrade-168045: &rest.Config{Host:"https://192.168.50.156:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.crt", KeyFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/profiles/kubernetes-upgrade-168045/client.key", CAFile:"/home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0804 02:25:40.163495  144246 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-168045"
	W0804 02:25:40.163517  144246 addons.go:243] addon default-storageclass should already be in state true
	I0804 02:25:40.163549  144246 host.go:66] Checking if "kubernetes-upgrade-168045" exists ...
	I0804 02:25:40.163944  144246 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:40.163996  144246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:40.177088  144246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0804 02:25:40.177660  144246 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:40.178203  144246 main.go:141] libmachine: Using API Version  1
	I0804 02:25:40.178224  144246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:40.178657  144246 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:40.178880  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetState
	I0804 02:25:40.180513  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:25:40.182695  144246 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0804 02:25:40.184227  144246 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 02:25:40.184248  144246 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0804 02:25:40.184274  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:25:40.184294  144246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I0804 02:25:40.184911  144246 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:40.185660  144246 main.go:141] libmachine: Using API Version  1
	I0804 02:25:40.185680  144246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:40.186208  144246 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:40.186848  144246 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:25:40.186877  144246 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:25:40.187280  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:25:40.187657  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:25:40.187685  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:25:40.187940  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:25:40.188100  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:25:40.188274  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:25:40.188448  144246 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:25:40.209207  144246 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44801
	I0804 02:25:40.209786  144246 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:25:40.210413  144246 main.go:141] libmachine: Using API Version  1
	I0804 02:25:40.210438  144246 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:25:40.210904  144246 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:25:40.211147  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetState
	I0804 02:25:40.213231  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .DriverName
	I0804 02:25:40.213522  144246 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0804 02:25:40.213543  144246 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0804 02:25:40.213565  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHHostname
	I0804 02:25:40.216322  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:25:40.216745  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:cc:20", ip: ""} in network mk-kubernetes-upgrade-168045: {Iface:virbr2 ExpiryTime:2024-08-04 03:19:30 +0000 UTC Type:0 Mac:52:54:00:dc:cc:20 Iaid: IPaddr:192.168.50.156 Prefix:24 Hostname:kubernetes-upgrade-168045 Clientid:01:52:54:00:dc:cc:20}
	I0804 02:25:40.216774  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | domain kubernetes-upgrade-168045 has defined IP address 192.168.50.156 and MAC address 52:54:00:dc:cc:20 in network mk-kubernetes-upgrade-168045
	I0804 02:25:40.216925  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHPort
	I0804 02:25:40.217135  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHKeyPath
	I0804 02:25:40.217286  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .GetSSHUsername
	I0804 02:25:40.217470  144246 sshutil.go:53] new ssh client: &{IP:192.168.50.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/kubernetes-upgrade-168045/id_rsa Username:docker}
	I0804 02:25:40.351840  144246 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 02:25:40.383176  144246 api_server.go:52] waiting for apiserver process to appear ...
	I0804 02:25:40.383286  144246 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 02:25:40.398359  144246 api_server.go:72] duration metric: took 262.83507ms to wait for apiserver process to appear ...
	I0804 02:25:40.398391  144246 api_server.go:88] waiting for apiserver healthz status ...
	I0804 02:25:40.398426  144246 api_server.go:253] Checking apiserver healthz at https://192.168.50.156:8443/healthz ...
	I0804 02:25:40.404778  144246 api_server.go:279] https://192.168.50.156:8443/healthz returned 200:
	ok
	I0804 02:25:40.405853  144246 api_server.go:141] control plane version: v1.31.0-rc.0
	I0804 02:25:40.405878  144246 api_server.go:131] duration metric: took 7.479361ms to wait for apiserver health ...
	I0804 02:25:40.405888  144246 system_pods.go:43] waiting for kube-system pods to appear ...
	I0804 02:25:40.413690  144246 system_pods.go:59] 8 kube-system pods found
	I0804 02:25:40.413721  144246 system_pods.go:61] "coredns-6f6b679f8f-fsdlt" [7350e353-eaf2-4d34-b641-8184b89a091c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 02:25:40.413731  144246 system_pods.go:61] "coredns-6f6b679f8f-nnn4n" [d8ebb426-4d43-47a3-a227-afd2c2c9a3e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0804 02:25:40.413739  144246 system_pods.go:61] "etcd-kubernetes-upgrade-168045" [40ae89f1-c798-4576-8ca8-64ea559fe7c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0804 02:25:40.413754  144246 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-168045" [428721f3-9f41-427f-8e54-5bdf94719122] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0804 02:25:40.413764  144246 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-168045" [8dcd736c-7d46-4e7f-9b37-3121e789aa5b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0804 02:25:40.413776  144246 system_pods.go:61] "kube-proxy-ngkjk" [3e6a0c97-1bef-47a2-9923-458a31d52839] Running
	I0804 02:25:40.413788  144246 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-168045" [2880c209-09db-4dfe-aca3-0e165d9d7bc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0804 02:25:40.413801  144246 system_pods.go:61] "storage-provisioner" [3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c] Running
	I0804 02:25:40.413810  144246 system_pods.go:74] duration metric: took 7.913639ms to wait for pod list to return data ...
	I0804 02:25:40.413824  144246 kubeadm.go:582] duration metric: took 278.308039ms to wait for: map[apiserver:true system_pods:true]
	I0804 02:25:40.413848  144246 node_conditions.go:102] verifying NodePressure condition ...
	I0804 02:25:40.416829  144246 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0804 02:25:40.416849  144246 node_conditions.go:123] node cpu capacity is 2
	I0804 02:25:40.416860  144246 node_conditions.go:105] duration metric: took 3.005586ms to run NodePressure ...
	I0804 02:25:40.416873  144246 start.go:241] waiting for startup goroutines ...
	I0804 02:25:40.477060  144246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0804 02:25:40.492280  144246 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0804 02:25:41.205380  144246 main.go:141] libmachine: Making call to close driver server
	I0804 02:25:41.205410  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Close
	I0804 02:25:41.205417  144246 main.go:141] libmachine: Making call to close driver server
	I0804 02:25:41.205439  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Close
	I0804 02:25:41.205719  144246 main.go:141] libmachine: Successfully made call to close driver server
	I0804 02:25:41.205736  144246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 02:25:41.205746  144246 main.go:141] libmachine: Making call to close driver server
	I0804 02:25:41.205753  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Close
	I0804 02:25:41.205928  144246 main.go:141] libmachine: Successfully made call to close driver server
	I0804 02:25:41.205948  144246 main.go:141] libmachine: Successfully made call to close driver server
	I0804 02:25:41.205953  144246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 02:25:41.205961  144246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 02:25:41.205964  144246 main.go:141] libmachine: Making call to close driver server
	I0804 02:25:41.206553  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Close
	I0804 02:25:41.206868  144246 main.go:141] libmachine: Successfully made call to close driver server
	I0804 02:25:41.206884  144246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 02:25:41.206870  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) DBG | Closing plugin on server side
	I0804 02:25:41.216594  144246 main.go:141] libmachine: Making call to close driver server
	I0804 02:25:41.216610  144246 main.go:141] libmachine: (kubernetes-upgrade-168045) Calling .Close
	I0804 02:25:41.216845  144246 main.go:141] libmachine: Successfully made call to close driver server
	I0804 02:25:41.216860  144246 main.go:141] libmachine: Making call to close connection to plugin binary
	I0804 02:25:41.218844  144246 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0804 02:25:41.220165  144246 addons.go:510] duration metric: took 1.084621851s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0804 02:25:41.220196  144246 start.go:246] waiting for cluster config update ...
	I0804 02:25:41.220206  144246 start.go:255] writing updated cluster config ...
	I0804 02:25:41.220442  144246 ssh_runner.go:195] Run: rm -f paused
	I0804 02:25:41.280390  144246 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0804 02:25:41.282463  144246 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-168045" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 04 02:25:41 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:41.982680496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738341982652008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6defc591-5565-45fd-a599-c6a4ca3d2f41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:41 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:41.983440953Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=519fad22-c376-4b1e-9a61-6226311ecce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:41 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:41.983497381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=519fad22-c376-4b1e-9a61-6226311ecce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:41 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:41.983915713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de1309bbe010b647c69d355082b75dd1710afc15188cac24d818d190c341260f,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338830821078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddd5436c5ce9e8696b39a3a4c31a87ca0bb86ec868abfa0f0b9b04bc021ebcc,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338869274227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4b1b00cca5ce2b80ade79bdf86180902b9e2e70a0a0ffd66d7d64821d2cc90,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722738338850651632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4ac2efba7f1ff3ebb4853a1f5669e321b611a104d454d6174181954fff4c2,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722738338840487872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20efa37a27c079fdc64c34f9394833a8fc7e4f137795b6a2b50241b068ec0996,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722738335037007623,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b19af63f1fa26d82ab011165ca15d8d5951f0936fe6a78a0c40a2d71dbb8ddc,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722738335009594976,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4cf4d4bc57697ea9144f110ba7eff8c568b19822b9b6e8869226e852c16a72,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722738335002189284,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa936be7adbf782c93026ec4362d41564e7b01b5224d7c2ffd76ec91698e79d,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722738335025065453,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722738305659830086,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306566119476,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306410800493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722738305825573981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722738305722394632,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722738305669762762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722738305692171356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722738305588724195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=519fad22-c376-4b1e-9a61-6226311ecce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.032363825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5680df17-8c9a-4b3d-abf8-01c23246807e name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.032441762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5680df17-8c9a-4b3d-abf8-01c23246807e name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.033711652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43221d75-b583-4e7e-92ed-b86c3a0e47c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.034087366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738342034063383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43221d75-b583-4e7e-92ed-b86c3a0e47c3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.034967115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b94bff8-2101-452c-9614-b734af253bab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.035025389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b94bff8-2101-452c-9614-b734af253bab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.035464190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de1309bbe010b647c69d355082b75dd1710afc15188cac24d818d190c341260f,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338830821078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddd5436c5ce9e8696b39a3a4c31a87ca0bb86ec868abfa0f0b9b04bc021ebcc,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338869274227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4b1b00cca5ce2b80ade79bdf86180902b9e2e70a0a0ffd66d7d64821d2cc90,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722738338850651632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4ac2efba7f1ff3ebb4853a1f5669e321b611a104d454d6174181954fff4c2,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722738338840487872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20efa37a27c079fdc64c34f9394833a8fc7e4f137795b6a2b50241b068ec0996,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722738335037007623,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b19af63f1fa26d82ab011165ca15d8d5951f0936fe6a78a0c40a2d71dbb8ddc,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722738335009594976,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4cf4d4bc57697ea9144f110ba7eff8c568b19822b9b6e8869226e852c16a72,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722738335002189284,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa936be7adbf782c93026ec4362d41564e7b01b5224d7c2ffd76ec91698e79d,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722738335025065453,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722738305659830086,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306566119476,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306410800493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722738305825573981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722738305722394632,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722738305669762762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722738305692171356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722738305588724195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b94bff8-2101-452c-9614-b734af253bab name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.083162265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8102db50-5046-4d1d-aedc-fd00a4be6ac0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.083294177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8102db50-5046-4d1d-aedc-fd00a4be6ac0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.084677299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a2ebdb0-feee-4956-a2eb-c7871eeabbdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.085032225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738342085010217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a2ebdb0-feee-4956-a2eb-c7871eeabbdf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.085704262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bb7912e-bb52-494d-a149-d7d0551ac228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.085761263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bb7912e-bb52-494d-a149-d7d0551ac228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.086748335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de1309bbe010b647c69d355082b75dd1710afc15188cac24d818d190c341260f,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338830821078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddd5436c5ce9e8696b39a3a4c31a87ca0bb86ec868abfa0f0b9b04bc021ebcc,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338869274227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4b1b00cca5ce2b80ade79bdf86180902b9e2e70a0a0ffd66d7d64821d2cc90,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722738338850651632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4ac2efba7f1ff3ebb4853a1f5669e321b611a104d454d6174181954fff4c2,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722738338840487872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20efa37a27c079fdc64c34f9394833a8fc7e4f137795b6a2b50241b068ec0996,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722738335037007623,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b19af63f1fa26d82ab011165ca15d8d5951f0936fe6a78a0c40a2d71dbb8ddc,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722738335009594976,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4cf4d4bc57697ea9144f110ba7eff8c568b19822b9b6e8869226e852c16a72,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722738335002189284,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa936be7adbf782c93026ec4362d41564e7b01b5224d7c2ffd76ec91698e79d,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722738335025065453,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722738305659830086,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306566119476,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306410800493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722738305825573981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722738305722394632,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722738305669762762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722738305692171356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722738305588724195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bb7912e-bb52-494d-a149-d7d0551ac228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.132475667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37e7db2e-e538-4349-a4e0-25ccc6b4c9a0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.132634895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37e7db2e-e538-4349-a4e0-25ccc6b4c9a0 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.134024637Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8eec1096-3926-4fb1-8984-335a33dcf8ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.134522905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738342134484522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8eec1096-3926-4fb1-8984-335a33dcf8ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.135537929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=728355e0-e102-4837-8343-4193efa89647 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.135596794Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=728355e0-e102-4837-8343-4193efa89647 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:25:42 kubernetes-upgrade-168045 crio[2302]: time="2024-08-04 02:25:42.135954120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de1309bbe010b647c69d355082b75dd1710afc15188cac24d818d190c341260f,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338830821078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddd5436c5ce9e8696b39a3a4c31a87ca0bb86ec868abfa0f0b9b04bc021ebcc,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722738338869274227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc4b1b00cca5ce2b80ade79bdf86180902b9e2e70a0a0ffd66d7d64821d2cc90,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1722738338850651632,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dae4ac2efba7f1ff3ebb4853a1f5669e321b611a104d454d6174181954fff4c2,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_RUNNING,C
reatedAt:1722738338840487872,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20efa37a27c079fdc64c34f9394833a8fc7e4f137795b6a2b50241b068ec0996,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722738335037007623,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b19af63f1fa26d82ab011165ca15d8d5951f0936fe6a78a0c40a2d71dbb8ddc,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722738335009594976,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4cf4d4bc57697ea9144f110ba7eff8c568b19822b9b6e8869226e852c16a72,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722738335002189284,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fa936be7adbf782c93026ec4362d41564e7b01b5224d7c2ffd76ec91698e79d,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722738335025065453,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04,PodSandboxId:90e38f83b3198f1578324269647ef3c2d15b67349f61f5d5efb609b67597cd9e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722738305659830086,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ngkjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e6a0c97-1bef-47a2-9923-458a31d52839,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a,PodSandboxId:65c0c71190820d57a3fde27876bc7b4812fb09f77c0adc1ff5c396f2ef1f8e87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306566119476,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-6f6b679f8f-fsdlt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7350e353-eaf2-4d34-b641-8184b89a091c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe,PodSandboxId:f2e645d7b9bc5593bc1ce356bbf232cbd542873406d56834f29e3f395518a133,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722738306410800493,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nnn4n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8ebb426-4d43-47a3-a227-afd2c2c9a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76,PodSandboxId:36d39d540790c36018a2630e4485eebc0e380269df67816e42c441344f0c1dbe,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722738305825573981,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564,PodSandboxId:a9a67fcc782adc86b6136f858375c2babf539c7515afc59596b3ac53b3101dea,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722738305722394632,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8064e6089f2497e2465975590bbd4ad2,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66,PodSandboxId:59134bb1d68345d50159ba273fdd6ff2647d69ee8bca5d3ce0c42e03638af814,Metadata:&ContainerMetadata{Name:k
ube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722738305669762762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db61dce04d330e3ac8bba90d4c3ea6d9,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233,PodSandboxId:59d3f7341062eb57043d50df310ae9d7dd05c9571c13848617184a7eb8b9b215,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722738305692171356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60a0815468a3d30cfc38aeb24aff1f5c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b,PodSandboxId:38b1bdd1a3fb2050d34249c64e544e1a0f3d0de5d4b4575977edb0fcdb33f463,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722738305588724195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-168045,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d91d2ce4f4e7d90d93d4d9da83f9bc,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=728355e0-e102-4837-8343-4193efa89647 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	eddd5436c5ce9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   f2e645d7b9bc5       coredns-6f6b679f8f-nnn4n
	cc4b1b00cca5c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   36d39d540790c       storage-provisioner
	dae4ac2efba7f       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   3 seconds ago       Running             kube-proxy                2                   90e38f83b3198       kube-proxy-ngkjk
	de1309bbe010b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   65c0c71190820       coredns-6f6b679f8f-fsdlt
	20efa37a27c07       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            2                   38b1bdd1a3fb2       kube-apiserver-kubernetes-upgrade-168045
	9fa936be7adbf       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   2                   59134bb1d6834       kube-controller-manager-kubernetes-upgrade-168045
	7b19af63f1fa2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   59d3f7341062e       etcd-kubernetes-upgrade-168045
	ad4cf4d4bc576       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   7 seconds ago       Running             kube-scheduler            2                   a9a67fcc782ad       kube-scheduler-kubernetes-upgrade-168045
	20d0ccdd6c843       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago      Exited              coredns                   1                   65c0c71190820       coredns-6f6b679f8f-fsdlt
	100dc524123df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago      Exited              coredns                   1                   f2e645d7b9bc5       coredns-6f6b679f8f-nnn4n
	5cff635182c2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   36 seconds ago      Exited              storage-provisioner       1                   36d39d540790c       storage-provisioner
	fbe012bb9065d       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   36 seconds ago      Exited              kube-scheduler            1                   a9a67fcc782ad       kube-scheduler-kubernetes-upgrade-168045
	c1b9a3859d9b8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   36 seconds ago      Exited              etcd                      1                   59d3f7341062e       etcd-kubernetes-upgrade-168045
	515be9a07266d       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   36 seconds ago      Exited              kube-controller-manager   1                   59134bb1d6834       kube-controller-manager-kubernetes-upgrade-168045
	1dda35fb48cfb       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   36 seconds ago      Exited              kube-proxy                1                   90e38f83b3198       kube-proxy-ngkjk
	ea252cfe7f10d       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   36 seconds ago      Exited              kube-apiserver            1                   38b1bdd1a3fb2       kube-apiserver-kubernetes-upgrade-168045
	
	
	==> coredns [100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de1309bbe010b647c69d355082b75dd1710afc15188cac24d818d190c341260f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [eddd5436c5ce9e8696b39a3a4c31a87ca0bb86ec868abfa0f0b9b04bc021ebcc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-168045
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-168045
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 04 Aug 2024 02:24:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-168045
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 04 Aug 2024 02:25:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 04 Aug 2024 02:25:38 +0000   Sun, 04 Aug 2024 02:24:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 04 Aug 2024 02:25:38 +0000   Sun, 04 Aug 2024 02:24:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 04 Aug 2024 02:25:38 +0000   Sun, 04 Aug 2024 02:24:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 04 Aug 2024 02:25:38 +0000   Sun, 04 Aug 2024 02:24:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.156
	  Hostname:    kubernetes-upgrade-168045
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 75aa6fc5964946b9ae777e99b28f1bc9
	  System UUID:                75aa6fc5-9649-46b9-ae77-7e99b28f1bc9
	  Boot ID:                    743d9551-4957-4db1-96f3-c1ca602cdab2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-fsdlt                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 coredns-6f6b679f8f-nnn4n                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-kubernetes-upgrade-168045                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         60s
	  kube-system                 kube-apiserver-kubernetes-upgrade-168045             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-168045    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-ngkjk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-kubernetes-upgrade-168045             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 69s)  kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     68s (x7 over 69s)  kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    68s (x8 over 69s)  kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           58s                node-controller  Node kubernetes-upgrade-168045 event: Registered Node kubernetes-upgrade-168045 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node kubernetes-upgrade-168045 event: Registered Node kubernetes-upgrade-168045 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-168045 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-168045 event: Registered Node kubernetes-upgrade-168045 in Controller
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.139884] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.061019] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062133] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.194944] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.118363] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +1.476945] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +6.491643] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +0.061356] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.926958] systemd-fstab-generator[865]: Ignoring "noauto" option for root device
	[ +11.305730] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.109299] kauditd_printk_skb: 97 callbacks suppressed
	[ +14.008683] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.089934] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.072434] systemd-fstab-generator[2234]: Ignoring "noauto" option for root device
	[  +0.192408] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.148127] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[Aug 4 02:25] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +5.015795] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.662895] systemd-fstab-generator[3133]: Ignoring "noauto" option for root device
	[  +3.531048] kauditd_printk_skb: 118 callbacks suppressed
	[ +23.875618] systemd-fstab-generator[3621]: Ignoring "noauto" option for root device
	[  +5.236113] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.671298] systemd-fstab-generator[4167]: Ignoring "noauto" option for root device
	
	
	==> etcd [7b19af63f1fa26d82ab011165ca15d8d5951f0936fe6a78a0c40a2d71dbb8ddc] <==
	{"level":"info","ts":"2024-08-04T02:25:35.699586Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"891365959a9102de","local-member-id":"8b20476ee1e1bfd0","added-peer-id":"8b20476ee1e1bfd0","added-peer-peer-urls":["https://192.168.50.156:2380"]}
	{"level":"info","ts":"2024-08-04T02:25:35.699720Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"891365959a9102de","local-member-id":"8b20476ee1e1bfd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:25:35.699767Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-04T02:25:35.709612Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T02:25:35.716458Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-04T02:25:35.720536Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8b20476ee1e1bfd0","initial-advertise-peer-urls":["https://192.168.50.156:2380"],"listen-peer-urls":["https://192.168.50.156:2380"],"advertise-client-urls":["https://192.168.50.156:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.156:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-04T02:25:35.722240Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-04T02:25:35.720254Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.156:2380"}
	{"level":"info","ts":"2024-08-04T02:25:35.724269Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.156:2380"}
	{"level":"info","ts":"2024-08-04T02:25:36.650441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:36.650511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:36.650555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 received MsgPreVoteResp from 8b20476ee1e1bfd0 at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:36.650574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became candidate at term 4"}
	{"level":"info","ts":"2024-08-04T02:25:36.650580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 received MsgVoteResp from 8b20476ee1e1bfd0 at term 4"}
	{"level":"info","ts":"2024-08-04T02:25:36.650588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became leader at term 4"}
	{"level":"info","ts":"2024-08-04T02:25:36.650596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b20476ee1e1bfd0 elected leader 8b20476ee1e1bfd0 at term 4"}
	{"level":"info","ts":"2024-08-04T02:25:36.656401Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8b20476ee1e1bfd0","local-member-attributes":"{Name:kubernetes-upgrade-168045 ClientURLs:[https://192.168.50.156:2379]}","request-path":"/0/members/8b20476ee1e1bfd0/attributes","cluster-id":"891365959a9102de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T02:25:36.656467Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:25:36.656781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:25:36.657719Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T02:25:36.658595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.156:2379"}
	{"level":"info","ts":"2024-08-04T02:25:36.659396Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T02:25:36.660124Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T02:25:36.660289Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T02:25:36.660319Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233] <==
	{"level":"info","ts":"2024-08-04T02:25:08.410407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-04T02:25:08.410423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 received MsgPreVoteResp from 8b20476ee1e1bfd0 at term 2"}
	{"level":"info","ts":"2024-08-04T02:25:08.410435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became candidate at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:08.410441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 received MsgVoteResp from 8b20476ee1e1bfd0 at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:08.410450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8b20476ee1e1bfd0 became leader at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:08.410457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8b20476ee1e1bfd0 elected leader 8b20476ee1e1bfd0 at term 3"}
	{"level":"info","ts":"2024-08-04T02:25:08.414742Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8b20476ee1e1bfd0","local-member-attributes":"{Name:kubernetes-upgrade-168045 ClientURLs:[https://192.168.50.156:2379]}","request-path":"/0/members/8b20476ee1e1bfd0/attributes","cluster-id":"891365959a9102de","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-04T02:25:08.415052Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:25:08.415663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-04T02:25:08.416697Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T02:25:08.417270Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-04T02:25:08.417321Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-04T02:25:08.418034Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.156:2379"}
	{"level":"info","ts":"2024-08-04T02:25:08.418899Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-04T02:25:08.420062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-04T02:25:22.583117Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-04T02:25:22.583324Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-168045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.156:2380"],"advertise-client-urls":["https://192.168.50.156:2379"]}
	{"level":"warn","ts":"2024-08-04T02:25:22.583396Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:25:22.583483Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:25:22.607707Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.156:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-04T02:25:22.608005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.156:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-04T02:25:22.609160Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8b20476ee1e1bfd0","current-leader-member-id":"8b20476ee1e1bfd0"}
	{"level":"info","ts":"2024-08-04T02:25:22.613115Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.156:2380"}
	{"level":"info","ts":"2024-08-04T02:25:22.613340Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.156:2380"}
	{"level":"info","ts":"2024-08-04T02:25:22.613392Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-168045","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.156:2380"],"advertise-client-urls":["https://192.168.50.156:2379"]}
	
	
	==> kernel <==
	 02:25:42 up 1 min,  0 users,  load average: 1.72, 0.58, 0.20
	Linux kubernetes-upgrade-168045 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20efa37a27c079fdc64c34f9394833a8fc7e4f137795b6a2b50241b068ec0996] <==
	I0804 02:25:38.005712       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0804 02:25:38.143067       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0804 02:25:38.152894       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0804 02:25:38.152970       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0804 02:25:38.153530       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0804 02:25:38.155300       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0804 02:25:38.155350       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0804 02:25:38.155886       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0804 02:25:38.155935       1 shared_informer.go:320] Caches are synced for configmaps
	I0804 02:25:38.157745       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0804 02:25:38.158108       1 aggregator.go:171] initial CRD sync complete...
	I0804 02:25:38.158151       1 autoregister_controller.go:144] Starting autoregister controller
	I0804 02:25:38.158174       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0804 02:25:38.158197       1 cache.go:39] Caches are synced for autoregister controller
	I0804 02:25:38.160371       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0804 02:25:38.160749       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0804 02:25:38.168651       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 02:25:38.168706       1 policy_source.go:224] refreshing policies
	I0804 02:25:38.182663       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0804 02:25:39.024332       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0804 02:25:39.866310       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0804 02:25:39.884289       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0804 02:25:39.920447       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0804 02:25:40.055758       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0804 02:25:40.065828       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b] <==
	W0804 02:25:31.943419       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.006811       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.009390       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.013743       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.039932       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.049788       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.051464       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.060960       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.076924       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.087710       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.106658       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.138746       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.158967       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.213480       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.242260       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.249981       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.284082       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.347712       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.413580       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.469376       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.535384       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.655502       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.683116       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.705096       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:25:32.903193       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66] <==
	I0804 02:25:13.346800       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 02:25:13.350062       1 shared_informer.go:320] Caches are synced for GC
	I0804 02:25:13.355429       1 shared_informer.go:320] Caches are synced for job
	I0804 02:25:13.357754       1 shared_informer.go:320] Caches are synced for HPA
	I0804 02:25:13.375572       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 02:25:13.387328       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0804 02:25:13.387475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="84.22µs"
	I0804 02:25:13.387583       1 shared_informer.go:320] Caches are synced for daemon sets
	I0804 02:25:13.415509       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 02:25:13.418974       1 shared_informer.go:320] Caches are synced for taint
	I0804 02:25:13.419147       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0804 02:25:13.419356       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-168045"
	I0804 02:25:13.419477       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0804 02:25:13.433047       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 02:25:13.444656       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 02:25:13.486318       1 shared_informer.go:320] Caches are synced for disruption
	I0804 02:25:13.535754       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 02:25:13.942485       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 02:25:13.942609       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0804 02:25:13.943471       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 02:25:15.404985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.346384ms"
	I0804 02:25:15.405307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="215.97µs"
	I0804 02:25:17.425609       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="55.486µs"
	I0804 02:25:22.532437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="36.721289ms"
	I0804 02:25:22.532525       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="60.509µs"
	
	
	==> kube-controller-manager [9fa936be7adbf782c93026ec4362d41564e7b01b5224d7c2ffd76ec91698e79d] <==
	I0804 02:25:41.442713       1 shared_informer.go:320] Caches are synced for endpoint
	I0804 02:25:41.450369       1 shared_informer.go:320] Caches are synced for HPA
	I0804 02:25:41.452327       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0804 02:25:41.453491       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0804 02:25:41.454654       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0804 02:25:41.455985       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0804 02:25:41.458722       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0804 02:25:41.467102       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0804 02:25:41.467320       1 shared_informer.go:320] Caches are synced for stateful set
	I0804 02:25:41.467338       1 shared_informer.go:320] Caches are synced for disruption
	I0804 02:25:41.467386       1 shared_informer.go:320] Caches are synced for ephemeral
	I0804 02:25:41.467396       1 shared_informer.go:320] Caches are synced for job
	I0804 02:25:41.468417       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0804 02:25:41.471793       1 shared_informer.go:320] Caches are synced for GC
	I0804 02:25:41.472405       1 shared_informer.go:320] Caches are synced for PVC protection
	I0804 02:25:41.491314       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0804 02:25:41.491572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-168045"
	I0804 02:25:41.516751       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0804 02:25:41.533322       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0804 02:25:41.564418       1 shared_informer.go:320] Caches are synced for attach detach
	I0804 02:25:41.627063       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 02:25:41.629863       1 shared_informer.go:320] Caches are synced for resource quota
	I0804 02:25:42.071773       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 02:25:42.110578       1 shared_informer.go:320] Caches are synced for garbage collector
	I0804 02:25:42.110662       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 02:25:08.212557       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0804 02:25:09.998362       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.156"]
	E0804 02:25:09.998564       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 02:25:10.182691       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 02:25:10.182750       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 02:25:10.182782       1 server_linux.go:169] "Using iptables Proxier"
	I0804 02:25:10.187417       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 02:25:10.187730       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 02:25:10.187758       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:25:10.193158       1 config.go:197] "Starting service config controller"
	I0804 02:25:10.193258       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 02:25:10.193285       1 config.go:104] "Starting endpoint slice config controller"
	I0804 02:25:10.193298       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 02:25:10.193730       1 config.go:326] "Starting node config controller"
	I0804 02:25:10.193778       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 02:25:10.294380       1 shared_informer.go:320] Caches are synced for node config
	I0804 02:25:10.294530       1 shared_informer.go:320] Caches are synced for service config
	I0804 02:25:10.295030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [dae4ac2efba7f1ff3ebb4853a1f5669e321b611a104d454d6174181954fff4c2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0804 02:25:39.293288       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0804 02:25:39.304671       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.156"]
	E0804 02:25:39.304969       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0804 02:25:39.362630       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0804 02:25:39.362724       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0804 02:25:39.362762       1 server_linux.go:169] "Using iptables Proxier"
	I0804 02:25:39.366455       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0804 02:25:39.366816       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0804 02:25:39.366930       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:25:39.368086       1 config.go:197] "Starting service config controller"
	I0804 02:25:39.368172       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0804 02:25:39.368269       1 config.go:104] "Starting endpoint slice config controller"
	I0804 02:25:39.368312       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0804 02:25:39.368775       1 config.go:326] "Starting node config controller"
	I0804 02:25:39.368838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0804 02:25:39.469256       1 shared_informer.go:320] Caches are synced for service config
	I0804 02:25:39.469273       1 shared_informer.go:320] Caches are synced for node config
	I0804 02:25:39.469287       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ad4cf4d4bc57697ea9144f110ba7eff8c568b19822b9b6e8869226e852c16a72] <==
	I0804 02:25:36.160057       1 serving.go:386] Generated self-signed cert in-memory
	W0804 02:25:38.027276       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 02:25:38.027319       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 02:25:38.027329       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 02:25:38.027335       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 02:25:38.100464       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0804 02:25:38.100504       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:25:38.111258       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 02:25:38.116796       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 02:25:38.116834       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 02:25:38.116965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0804 02:25:38.218003       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564] <==
	I0804 02:25:08.071475       1 serving.go:386] Generated self-signed cert in-memory
	W0804 02:25:09.907599       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0804 02:25:09.907697       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0804 02:25:09.907725       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0804 02:25:09.907805       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0804 02:25:09.999984       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0804 02:25:10.000082       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0804 02:25:10.008628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0804 02:25:10.010394       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0804 02:25:10.010448       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0804 02:25:10.018296       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 02:25:10.118981       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0804 02:25:22.402788       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0804 02:25:22.403249       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.719766    3628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/60a0815468a3d30cfc38aeb24aff1f5c-etcd-certs\") pod \"etcd-kubernetes-upgrade-168045\" (UID: \"60a0815468a3d30cfc38aeb24aff1f5c\") " pod="kube-system/etcd-kubernetes-upgrade-168045"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.719786    3628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/60a0815468a3d30cfc38aeb24aff1f5c-etcd-data\") pod \"etcd-kubernetes-upgrade-168045\" (UID: \"60a0815468a3d30cfc38aeb24aff1f5c\") " pod="kube-system/etcd-kubernetes-upgrade-168045"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.901984    3628 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-168045"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: E0804 02:25:34.902858    3628 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.156:8443: connect: connection refused" node="kubernetes-upgrade-168045"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.984424    3628 scope.go:117] "RemoveContainer" containerID="c1b9a3859d9b8245c15abcd85c2ebc6890c47da54b56a367b3f1d1efa35a5233"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.984633    3628 scope.go:117] "RemoveContainer" containerID="fbe012bb9065dc604e3514ecc29055cc3fec0cfce70b4abeec26407ed9bb9564"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.986913    3628 scope.go:117] "RemoveContainer" containerID="ea252cfe7f10da5d120ca062f069fc34d2bbb6788c83ee2aa97063e99ef0cd5b"
	Aug 04 02:25:34 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:34.988473    3628 scope.go:117] "RemoveContainer" containerID="515be9a07266dcae0e2cd9b84400e3bba899825e096f5df85d522f4d0c618e66"
	Aug 04 02:25:35 kubernetes-upgrade-168045 kubelet[3628]: E0804 02:25:35.119578    3628 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-168045?timeout=10s\": dial tcp 192.168.50.156:8443: connect: connection refused" interval="800ms"
	Aug 04 02:25:35 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:35.304882    3628 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-168045"
	Aug 04 02:25:35 kubernetes-upgrade-168045 kubelet[3628]: E0804 02:25:35.306250    3628 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.156:8443: connect: connection refused" node="kubernetes-upgrade-168045"
	Aug 04 02:25:36 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:36.108867    3628 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-168045"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.204576    3628 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-168045"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.205022    3628 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-168045"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.205096    3628 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.206159    3628 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.493419    3628 apiserver.go:52] "Watching apiserver"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.514601    3628 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.594551    3628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a0c97-1bef-47a2-9923-458a31d52839-xtables-lock\") pod \"kube-proxy-ngkjk\" (UID: \"3e6a0c97-1bef-47a2-9923-458a31d52839\") " pod="kube-system/kube-proxy-ngkjk"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.594601    3628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e6a0c97-1bef-47a2-9923-458a31d52839-lib-modules\") pod \"kube-proxy-ngkjk\" (UID: \"3e6a0c97-1bef-47a2-9923-458a31d52839\") " pod="kube-system/kube-proxy-ngkjk"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.594629    3628 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c-tmp\") pod \"storage-provisioner\" (UID: \"3fc12140-ecd7-43c0-9af2-3c5f7c9c4e1c\") " pod="kube-system/storage-provisioner"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.798376    3628 scope.go:117] "RemoveContainer" containerID="5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.798896    3628 scope.go:117] "RemoveContainer" containerID="1dda35fb48cfb6837635fe6e7a61bc7fa89b738fcff2ccb87572d627822f9a04"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.799772    3628 scope.go:117] "RemoveContainer" containerID="100dc524123df6aa314ead844ef02d3cea38b93073985c20d7fc44e6468abcbe"
	Aug 04 02:25:38 kubernetes-upgrade-168045 kubelet[3628]: I0804 02:25:38.799885    3628 scope.go:117] "RemoveContainer" containerID="20d0ccdd6c8437b8cff9e50ce807fe24a63fc50a8e4676bea26fbac1ccc6256a"
	
	
	==> storage-provisioner [5cff635182c2e896ed3f71019aaa3298ad5f8c3f43fc83723fb8ae540102ee76] <==
	I0804 02:25:07.008463       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 02:25:10.052747       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 02:25:10.052825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0804 02:25:10.114379       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0804 02:25:10.115480       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2948c8e-16fa-4822-b94c-1faccd37a2b7", APIVersion:"v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-168045_6021ff7b-c662-4cda-b775-c53800ffe36d became leader
	I0804 02:25:10.115652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-168045_6021ff7b-c662-4cda-b775-c53800ffe36d!
	I0804 02:25:10.216490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-168045_6021ff7b-c662-4cda-b775-c53800ffe36d!
	
	
	==> storage-provisioner [cc4b1b00cca5ce2b80ade79bdf86180902b9e2e70a0a0ffd66d7d64821d2cc90] <==
	I0804 02:25:39.058508       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0804 02:25:39.097137       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0804 02:25:39.097191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-168045 -n kubernetes-upgrade-168045
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-168045 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-168045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-168045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-168045: (1.171818676s)
--- FAIL: TestKubernetesUpgrade (418.13s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (765.39s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-141370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0804 02:21:25.317505   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 02:21:42.266223   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-141370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: signal: killed (12m29.087001715s)

                                                
                                                
-- stdout --
	* [pause-141370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-141370" primary control-plane node in "pause-141370" cluster
	* Updating the running kvm2 "pause-141370" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 02:21:21.832719  139087 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:21:21.832913  139087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:21:21.832928  139087 out.go:304] Setting ErrFile to fd 2...
	I0804 02:21:21.832943  139087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:21:21.833421  139087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:21:21.834265  139087 out.go:298] Setting JSON to false
	I0804 02:21:21.835896  139087 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14626,"bootTime":1722723456,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:21:21.836005  139087 start.go:139] virtualization: kvm guest
	I0804 02:21:21.838507  139087 out.go:177] * [pause-141370] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:21:21.840063  139087 notify.go:220] Checking for updates...
	I0804 02:21:21.840715  139087 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:21:21.842251  139087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:21:21.843632  139087 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:21:21.845196  139087 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:21:21.846540  139087 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:21:21.847892  139087 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:21:21.849932  139087 config.go:182] Loaded profile config "pause-141370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:21:21.850560  139087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:21:21.850647  139087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:21:21.876774  139087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0804 02:21:21.877270  139087 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:21:21.877958  139087 main.go:141] libmachine: Using API Version  1
	I0804 02:21:21.877984  139087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:21:21.878389  139087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:21:21.878627  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:21.878913  139087 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:21:21.879255  139087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:21:21.879308  139087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:21:21.894784  139087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42391
	I0804 02:21:21.895228  139087 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:21:21.895894  139087 main.go:141] libmachine: Using API Version  1
	I0804 02:21:21.895920  139087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:21:21.896350  139087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:21:21.896568  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:21.934983  139087 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 02:21:21.936453  139087 start.go:297] selected driver: kvm2
	I0804 02:21:21.936474  139087 start.go:901] validating driver "kvm2" against &{Name:pause-141370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-141370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:21:21.936729  139087 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:21:21.937211  139087 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:21:21.937305  139087 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:21:21.958299  139087 install.go:137] /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:21:21.959186  139087 cni.go:84] Creating CNI manager for ""
	I0804 02:21:21.959205  139087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:21:21.959290  139087 start.go:340] cluster config:
	{Name:pause-141370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-141370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:21:21.959490  139087 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:21:21.961394  139087 out.go:177] * Starting "pause-141370" primary control-plane node in "pause-141370" cluster
	I0804 02:21:21.962646  139087 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:21:21.962692  139087 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 02:21:21.962706  139087 cache.go:56] Caching tarball of preloaded images
	I0804 02:21:21.962821  139087 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:21:21.962837  139087 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 02:21:21.962988  139087 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/config.json ...
	I0804 02:21:21.963213  139087 start.go:360] acquireMachinesLock for pause-141370: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:21:44.715085  139087 start.go:364] duration metric: took 22.751840596s to acquireMachinesLock for "pause-141370"
	I0804 02:21:44.715153  139087 start.go:96] Skipping create...Using existing machine configuration
	I0804 02:21:44.715162  139087 fix.go:54] fixHost starting: 
	I0804 02:21:44.715581  139087 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:21:44.715645  139087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:21:44.733392  139087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0804 02:21:44.733916  139087 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:21:44.734495  139087 main.go:141] libmachine: Using API Version  1
	I0804 02:21:44.734526  139087 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:21:44.734921  139087 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:21:44.735133  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:44.735293  139087 main.go:141] libmachine: (pause-141370) Calling .GetState
	I0804 02:21:44.737109  139087 fix.go:112] recreateIfNeeded on pause-141370: state=Running err=<nil>
	W0804 02:21:44.737134  139087 fix.go:138] unexpected machine state, will restart: <nil>
	I0804 02:21:44.739194  139087 out.go:177] * Updating the running kvm2 "pause-141370" VM ...
	I0804 02:21:44.740577  139087 machine.go:94] provisionDockerMachine start ...
	I0804 02:21:44.740600  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:44.740826  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:44.743989  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.744509  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:44.744542  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.744990  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:44.745156  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.745307  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.745458  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:44.745673  139087 main.go:141] libmachine: Using SSH client type: native
	I0804 02:21:44.745986  139087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0804 02:21:44.746008  139087 main.go:141] libmachine: About to run SSH command:
	hostname
	I0804 02:21:44.850991  139087 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-141370
	
	I0804 02:21:44.851027  139087 main.go:141] libmachine: (pause-141370) Calling .GetMachineName
	I0804 02:21:44.851323  139087 buildroot.go:166] provisioning hostname "pause-141370"
	I0804 02:21:44.851353  139087 main.go:141] libmachine: (pause-141370) Calling .GetMachineName
	I0804 02:21:44.851579  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:44.854933  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.855334  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:44.855371  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.855678  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:44.855896  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.856085  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.856216  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:44.856416  139087 main.go:141] libmachine: Using SSH client type: native
	I0804 02:21:44.856642  139087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0804 02:21:44.856657  139087 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-141370 && echo "pause-141370" | sudo tee /etc/hostname
	I0804 02:21:44.989402  139087 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-141370
	
	I0804 02:21:44.989442  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:44.992629  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.993046  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:44.993079  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:44.993257  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:44.993464  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.993645  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:44.993788  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:44.994043  139087 main.go:141] libmachine: Using SSH client type: native
	I0804 02:21:44.994251  139087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0804 02:21:44.994273  139087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-141370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-141370/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-141370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0804 02:21:45.110098  139087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0804 02:21:45.110134  139087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19364-90243/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-90243/.minikube}
	I0804 02:21:45.110166  139087 buildroot.go:174] setting up certificates
	I0804 02:21:45.110179  139087 provision.go:84] configureAuth start
	I0804 02:21:45.110192  139087 main.go:141] libmachine: (pause-141370) Calling .GetMachineName
	I0804 02:21:45.110477  139087 main.go:141] libmachine: (pause-141370) Calling .GetIP
	I0804 02:21:45.113281  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.113626  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:45.113655  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.113851  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:45.116500  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.116866  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:45.116910  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.117038  139087 provision.go:143] copyHostCerts
	I0804 02:21:45.117104  139087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem, removing ...
	I0804 02:21:45.117121  139087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem
	I0804 02:21:45.117189  139087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/ca.pem (1082 bytes)
	I0804 02:21:45.117308  139087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem, removing ...
	I0804 02:21:45.117320  139087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem
	I0804 02:21:45.117373  139087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/cert.pem (1123 bytes)
	I0804 02:21:45.117457  139087 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem, removing ...
	I0804 02:21:45.117467  139087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem
	I0804 02:21:45.117495  139087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-90243/.minikube/key.pem (1679 bytes)
	I0804 02:21:45.117581  139087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem org=jenkins.pause-141370 san=[127.0.0.1 192.168.61.197 localhost minikube pause-141370]
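Here configureAuth issues a server certificate whose subject alternative names cover the loopback address, the VM's IP and the machine names, signed by the CA under .minikube/certs. A minimal sketch of issuing a SAN-bearing server certificate with crypto/x509; the CA is generated in memory rather than loaded from ca.pem/ca-key.pem, and error handling is omitted for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-141370"}},
		DNSNames:     []string{"localhost", "minikube", "pause-141370"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.197")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server certificate (%d DER bytes)\n", len(srvDER))
}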
	I0804 02:21:45.369317  139087 provision.go:177] copyRemoteCerts
	I0804 02:21:45.369426  139087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0804 02:21:45.369464  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:45.372125  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.372553  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:45.372594  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.372788  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:45.372962  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:45.373123  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:45.373262  139087 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/pause-141370/id_rsa Username:docker}
	I0804 02:21:45.458530  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0804 02:21:45.486931  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0804 02:21:45.513799  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0804 02:21:45.546207  139087 provision.go:87] duration metric: took 436.009943ms to configureAuth
	I0804 02:21:45.546253  139087 buildroot.go:189] setting minikube options for container-runtime
	I0804 02:21:45.546541  139087 config.go:182] Loaded profile config "pause-141370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:21:45.546675  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:45.549913  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.550348  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:45.550390  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:45.550721  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:45.550949  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:45.551115  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:45.551292  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:45.551554  139087 main.go:141] libmachine: Using SSH client type: native
	I0804 02:21:45.551783  139087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0804 02:21:45.551805  139087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0804 02:21:51.127544  139087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0804 02:21:51.127576  139087 machine.go:97] duration metric: took 6.386985665s to provisionDockerMachine
	I0804 02:21:51.127592  139087 start.go:293] postStartSetup for "pause-141370" (driver="kvm2")
	I0804 02:21:51.127607  139087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0804 02:21:51.127629  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:51.127984  139087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0804 02:21:51.128020  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:51.131057  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.131450  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:51.131482  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.131636  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:51.131857  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:51.132013  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:51.132204  139087 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/pause-141370/id_rsa Username:docker}
	I0804 02:21:51.217328  139087 ssh_runner.go:195] Run: cat /etc/os-release
	I0804 02:21:51.221719  139087 info.go:137] Remote host: Buildroot 2023.02.9
	I0804 02:21:51.221749  139087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/addons for local assets ...
	I0804 02:21:51.221827  139087 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-90243/.minikube/files for local assets ...
	I0804 02:21:51.221944  139087 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem -> 974072.pem in /etc/ssl/certs
	I0804 02:21:51.222083  139087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0804 02:21:51.231639  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:21:51.267385  139087 start.go:296] duration metric: took 139.773484ms for postStartSetup
	I0804 02:21:51.267443  139087 fix.go:56] duration metric: took 6.552281077s for fixHost
	I0804 02:21:51.267473  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:51.271028  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.271557  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:51.271601  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.271770  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:51.272008  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:51.272280  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:51.272471  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:51.272665  139087 main.go:141] libmachine: Using SSH client type: native
	I0804 02:21:51.272860  139087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.197 22 <nil> <nil>}
	I0804 02:21:51.272874  139087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0804 02:21:51.386313  139087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722738111.377450126
	
	I0804 02:21:51.386339  139087 fix.go:216] guest clock: 1722738111.377450126
	I0804 02:21:51.386354  139087 fix.go:229] Guest: 2024-08-04 02:21:51.377450126 +0000 UTC Remote: 2024-08-04 02:21:51.267448639 +0000 UTC m=+29.487108278 (delta=110.001487ms)
	I0804 02:21:51.386383  139087 fix.go:200] guest clock delta is within tolerance: 110.001487ms
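The guest clock check runs date +%s.%N inside the VM, parses the result, and compares it with the host's view of the same moment; here the ~110ms drift is accepted. A small sketch of that parse-and-compare step (the tolerance value below is an assumption, not necessarily the one minikube applies):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722738111.377450126\n") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}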
	I0804 02:21:51.386389  139087 start.go:83] releasing machines lock for "pause-141370", held for 6.671259973s
	I0804 02:21:51.386413  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:51.386655  139087 main.go:141] libmachine: (pause-141370) Calling .GetIP
	I0804 02:21:51.389129  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.389569  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:51.389602  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.389762  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:51.390373  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:51.390589  139087 main.go:141] libmachine: (pause-141370) Calling .DriverName
	I0804 02:21:51.390683  139087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0804 02:21:51.390747  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:51.390835  139087 ssh_runner.go:195] Run: cat /version.json
	I0804 02:21:51.390859  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHHostname
	I0804 02:21:51.393820  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.394067  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.394274  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:51.394330  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.394425  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:51.394592  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:21:51.394626  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:21:51.394662  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:51.394814  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:51.394909  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHPort
	I0804 02:21:51.395006  139087 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/pause-141370/id_rsa Username:docker}
	I0804 02:21:51.395046  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHKeyPath
	I0804 02:21:51.395180  139087 main.go:141] libmachine: (pause-141370) Calling .GetSSHUsername
	I0804 02:21:51.395319  139087 sshutil.go:53] new ssh client: &{IP:192.168.61.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/pause-141370/id_rsa Username:docker}
	I0804 02:21:51.527645  139087 ssh_runner.go:195] Run: systemctl --version
	I0804 02:21:51.609517  139087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0804 02:21:52.034125  139087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0804 02:21:52.076323  139087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0804 02:21:52.076406  139087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0804 02:21:52.133727  139087 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
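The find ... -exec mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix; in this run nothing matched, so nothing was disabled. A rough Go equivalent of that rename pass (directory and suffix as in the log; the substring matching is a simplification):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Sideline bridge/podman configs so the runtime ignores them.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			} else {
				fmt.Println("disabled", src)
			}
		}
	}
}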
	I0804 02:21:52.133753  139087 start.go:495] detecting cgroup driver to use...
	I0804 02:21:52.133816  139087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0804 02:21:52.228217  139087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0804 02:21:52.272795  139087 docker.go:217] disabling cri-docker service (if available) ...
	I0804 02:21:52.272874  139087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0804 02:21:52.318156  139087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0804 02:21:52.371550  139087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0804 02:21:52.560781  139087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0804 02:21:52.770170  139087 docker.go:233] disabling docker service ...
	I0804 02:21:52.770250  139087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0804 02:21:52.797730  139087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0804 02:21:52.816356  139087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0804 02:21:53.024129  139087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0804 02:21:53.228969  139087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0804 02:21:53.244793  139087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0804 02:21:53.270529  139087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0804 02:21:53.270600  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.282151  139087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0804 02:21:53.282247  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.293798  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.311307  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.323330  139087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0804 02:21:53.335147  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.346661  139087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.358508  139087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0804 02:21:53.370748  139087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0804 02:21:53.381895  139087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0804 02:21:53.392383  139087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:21:53.552802  139087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0804 02:23:24.661260  139087 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m31.108399028s)
	I0804 02:23:24.661298  139087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0804 02:23:24.661378  139087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0804 02:23:24.667654  139087 start.go:563] Will wait 60s for crictl version
	I0804 02:23:24.667750  139087 ssh_runner.go:195] Run: which crictl
	I0804 02:23:24.672196  139087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0804 02:23:24.712729  139087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0804 02:23:24.712818  139087 ssh_runner.go:195] Run: crio --version
	I0804 02:23:24.752039  139087 ssh_runner.go:195] Run: crio --version
	I0804 02:23:24.792156  139087 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0804 02:23:24.793599  139087 main.go:141] libmachine: (pause-141370) Calling .GetIP
	I0804 02:23:24.796995  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:23:24.797196  139087 main.go:141] libmachine: (pause-141370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:05:aa", ip: ""} in network mk-pause-141370: {Iface:virbr3 ExpiryTime:2024-08-04 03:19:56 +0000 UTC Type:0 Mac:52:54:00:44:05:aa Iaid: IPaddr:192.168.61.197 Prefix:24 Hostname:pause-141370 Clientid:01:52:54:00:44:05:aa}
	I0804 02:23:24.797229  139087 main.go:141] libmachine: (pause-141370) DBG | domain pause-141370 has defined IP address 192.168.61.197 and MAC address 52:54:00:44:05:aa in network mk-pause-141370
	I0804 02:23:24.797530  139087 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0804 02:23:24.803241  139087 kubeadm.go:883] updating cluster {Name:pause-141370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-141370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0804 02:23:24.803421  139087 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:23:24.803477  139087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:23:24.854581  139087 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:23:24.854613  139087 crio.go:433] Images already preloaded, skipping extraction
	I0804 02:23:24.854693  139087 ssh_runner.go:195] Run: sudo crictl images --output json
	I0804 02:23:24.890328  139087 crio.go:514] all images are preloaded for cri-o runtime.
	I0804 02:23:24.890358  139087 cache_images.go:84] Images are preloaded, skipping loading
	I0804 02:23:24.890368  139087 kubeadm.go:934] updating node { 192.168.61.197 8443 v1.30.3 crio true true} ...
	I0804 02:23:24.890515  139087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-141370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.197
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-141370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0804 02:23:24.890601  139087 ssh_runner.go:195] Run: crio config
	I0804 02:23:24.943372  139087 cni.go:84] Creating CNI manager for ""
	I0804 02:23:24.943400  139087 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:23:24.943417  139087 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0804 02:23:24.943448  139087 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.197 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-141370 NodeName:pause-141370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.197"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.197 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0804 02:23:24.943629  139087 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.197
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-141370"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.197
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.197"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0804 02:23:24.943705  139087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0804 02:23:24.958825  139087 binaries.go:44] Found k8s binaries, skipping transfer
	I0804 02:23:24.958908  139087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0804 02:23:24.969724  139087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0804 02:23:24.990782  139087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0804 02:23:25.011870  139087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
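The kubeadm configuration shown above is assembled from the per-profile values (node name, node IP, API server port, CRI socket, cgroup driver) and has just been written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering only the InitConfiguration stanza from those values with text/template; this is not minikube's actual template, just an illustration of the substitution:

package main

import (
	"os"
	"text/template"
)

// A cut-down InitConfiguration template covering only the fields visible in the log.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	params := struct {
		NodeName string
		NodeIP   string
		Port     int
	}{"pause-141370", "192.168.61.197", 8443}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}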
	I0804 02:23:25.032973  139087 ssh_runner.go:195] Run: grep 192.168.61.197	control-plane.minikube.internal$ /etc/hosts
	I0804 02:23:25.038260  139087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0804 02:23:25.205434  139087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0804 02:23:25.231445  139087 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370 for IP: 192.168.61.197
	I0804 02:23:25.231472  139087 certs.go:194] generating shared ca certs ...
	I0804 02:23:25.231492  139087 certs.go:226] acquiring lock for ca certs: {Name:mkef7363e08ef5c143c0b2fd074f1acbd9612120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 02:23:25.231670  139087 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key
	I0804 02:23:25.231747  139087 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key
	I0804 02:23:25.231762  139087 certs.go:256] generating profile certs ...
	I0804 02:23:25.231885  139087 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/client.key
	I0804 02:23:25.231960  139087 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/apiserver.key.1e8a2a23
	I0804 02:23:25.232012  139087 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/proxy-client.key
	I0804 02:23:25.232161  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem (1338 bytes)
	W0804 02:23:25.232207  139087 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407_empty.pem, impossibly tiny 0 bytes
	I0804 02:23:25.232220  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca-key.pem (1675 bytes)
	I0804 02:23:25.232255  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/ca.pem (1082 bytes)
	I0804 02:23:25.232286  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/cert.pem (1123 bytes)
	I0804 02:23:25.232318  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/certs/key.pem (1679 bytes)
	I0804 02:23:25.232377  139087 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem (1708 bytes)
	I0804 02:23:25.233243  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0804 02:23:25.274506  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0804 02:23:25.305035  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0804 02:23:25.334635  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0804 02:23:25.360372  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0804 02:23:25.390382  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0804 02:23:25.416890  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0804 02:23:25.444984  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/pause-141370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0804 02:23:25.472407  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/ssl/certs/974072.pem --> /usr/share/ca-certificates/974072.pem (1708 bytes)
	I0804 02:23:25.498174  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0804 02:23:25.526302  139087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-90243/.minikube/certs/97407.pem --> /usr/share/ca-certificates/97407.pem (1338 bytes)
	I0804 02:23:25.560363  139087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0804 02:23:25.581106  139087 ssh_runner.go:195] Run: openssl version
	I0804 02:23:25.589601  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/974072.pem && ln -fs /usr/share/ca-certificates/974072.pem /etc/ssl/certs/974072.pem"
	I0804 02:23:25.605240  139087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/974072.pem
	I0804 02:23:25.611732  139087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  4 01:24 /usr/share/ca-certificates/974072.pem
	I0804 02:23:25.611814  139087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/974072.pem
	I0804 02:23:25.619800  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/974072.pem /etc/ssl/certs/3ec20f2e.0"
	I0804 02:23:25.630294  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0804 02:23:25.646185  139087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:23:25.652232  139087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  4 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:23:25.652302  139087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0804 02:23:25.668239  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0804 02:23:25.773572  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97407.pem && ln -fs /usr/share/ca-certificates/97407.pem /etc/ssl/certs/97407.pem"
	I0804 02:23:25.872145  139087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97407.pem
	I0804 02:23:25.911512  139087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  4 01:24 /usr/share/ca-certificates/97407.pem
	I0804 02:23:25.911599  139087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97407.pem
	I0804 02:23:25.954865  139087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/97407.pem /etc/ssl/certs/51391683.0"
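Each CA file copied into /usr/share/ca-certificates is then made visible to OpenSSL by linking it into /etc/ssl/certs under its subject hash, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 names above come from. A small Go sketch of that pattern, shelling out to openssl for the hash exactly as the logged commands do (paths are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under "<subject-hash>.0".
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}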
	I0804 02:23:26.015551  139087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0804 02:23:26.027281  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0804 02:23:26.066102  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0804 02:23:26.074877  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0804 02:23:26.089132  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0804 02:23:26.098960  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0804 02:23:26.117708  139087 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
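The openssl x509 -checkend 86400 calls above confirm that each existing control-plane certificate remains valid for at least another 24 hours before the cluster is reused. The same check expressed with crypto/x509 instead of shelling out (the path is one of the files from the log; the helper name is made up):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}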
	I0804 02:23:26.125385  139087 kubeadm.go:392] StartCluster: {Name:pause-141370 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-141370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.197 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:23:26.125598  139087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0804 02:23:26.125712  139087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0804 02:23:26.200165  139087 cri.go:89] found id: "bc5102abb9d99d7952bfee28010e407182cd721a0312dbe0caab6909eabcabc1"
	I0804 02:23:26.200197  139087 cri.go:89] found id: "1b40e4634e64ff938887afe15c4a849baece9f0c98e7014281801fd04ecf0a45"
	I0804 02:23:26.200204  139087 cri.go:89] found id: "03f3e75b8adb7d528f41323bb301728f10a1fada96602f992a72ca6cc9dab38e"
	I0804 02:23:26.200208  139087 cri.go:89] found id: "db00066aed9aad1de7b99555f75d015875d199bc34d5199394030739e534a6b5"
	I0804 02:23:26.200212  139087 cri.go:89] found id: "9ea3a14ed4e93bd205556929a00b61cb92e1d73a5ec55ec440bbf2bf7e9eb0d8"
	I0804 02:23:26.200217  139087 cri.go:89] found id: "ef489f766399031aaff727648afe36fc26e54b6af1e756c827a447aeb1302d47"
	I0804 02:23:26.200221  139087 cri.go:89] found id: ""
	I0804 02:23:26.200278  139087 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-141370 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-141370 -n pause-141370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-141370 -n pause-141370: exit status 2 (15.234071569s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-141370 logs -n 25
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-168045                           | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:24 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-168045                           | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:24 UTC | 04 Aug 24 02:25 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-866998                              | stopped-upgrade-866998    | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC | 04 Aug 24 02:25 UTC |
	| start   | -p cert-options-933588                                 | cert-options-933588       | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC | 04 Aug 24 02:26 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-168045                           | kubernetes-upgrade-168045 | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC | 04 Aug 24 02:25 UTC |
	| start   | -p old-k8s-version-624262                              | old-k8s-version-624262    | jenkins | v1.33.1 | 04 Aug 24 02:25 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-933588 ssh                                | cert-options-933588       | jenkins | v1.33.1 | 04 Aug 24 02:26 UTC | 04 Aug 24 02:26 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-933588 -- sudo                         | cert-options-933588       | jenkins | v1.33.1 | 04 Aug 24 02:26 UTC | 04 Aug 24 02:26 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-933588                                 | cert-options-933588       | jenkins | v1.33.1 | 04 Aug 24 02:26 UTC | 04 Aug 24 02:26 UTC |
	| start   | -p no-preload-989117                                   | no-preload-989117         | jenkins | v1.33.1 | 04 Aug 24 02:26 UTC | 04 Aug 24 02:27 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                           |         |         |                     |                     |
	| start   | -p cert-expiration-362636                              | cert-expiration-362636    | jenkins | v1.33.1 | 04 Aug 24 02:27 UTC | 04 Aug 24 02:27 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-362636                              | cert-expiration-362636    | jenkins | v1.33.1 | 04 Aug 24 02:27 UTC | 04 Aug 24 02:27 UTC |
	| start   | -p embed-certs-118541                                  | embed-certs-118541        | jenkins | v1.33.1 | 04 Aug 24 02:27 UTC | 04 Aug 24 02:29 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-989117             | no-preload-989117         | jenkins | v1.33.1 | 04 Aug 24 02:28 UTC | 04 Aug 24 02:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-989117                                   | no-preload-989117         | jenkins | v1.33.1 | 04 Aug 24 02:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-118541            | embed-certs-118541        | jenkins | v1.33.1 | 04 Aug 24 02:29 UTC | 04 Aug 24 02:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-118541                                  | embed-certs-118541        | jenkins | v1.33.1 | 04 Aug 24 02:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-624262        | old-k8s-version-624262    | jenkins | v1.33.1 | 04 Aug 24 02:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-989117                  | no-preload-989117         | jenkins | v1.33.1 | 04 Aug 24 02:30 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-989117                                   | no-preload-989117         | jenkins | v1.33.1 | 04 Aug 24 02:30 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-624262                              | old-k8s-version-624262    | jenkins | v1.33.1 | 04 Aug 24 02:31 UTC | 04 Aug 24 02:31 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-624262             | old-k8s-version-624262    | jenkins | v1.33.1 | 04 Aug 24 02:31 UTC | 04 Aug 24 02:31 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-624262                              | old-k8s-version-624262    | jenkins | v1.33.1 | 04 Aug 24 02:31 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-118541                 | embed-certs-118541        | jenkins | v1.33.1 | 04 Aug 24 02:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-118541                                  | embed-certs-118541        | jenkins | v1.33.1 | 04 Aug 24 02:31 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 02:31:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 02:31:53.262226  148269 out.go:291] Setting OutFile to fd 1 ...
	I0804 02:31:53.262489  148269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:31:53.262499  148269 out.go:304] Setting ErrFile to fd 2...
	I0804 02:31:53.262505  148269 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 02:31:53.262732  148269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 02:31:53.263293  148269 out.go:298] Setting JSON to false
	I0804 02:31:53.264220  148269 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15257,"bootTime":1722723456,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 02:31:53.264282  148269 start.go:139] virtualization: kvm guest
	I0804 02:31:53.266353  148269 out.go:177] * [embed-certs-118541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 02:31:53.267844  148269 notify.go:220] Checking for updates...
	I0804 02:31:53.267853  148269 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 02:31:53.269408  148269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 02:31:53.270619  148269 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 02:31:53.272111  148269 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 02:31:53.273509  148269 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 02:31:53.275027  148269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 02:31:53.276727  148269 config.go:182] Loaded profile config "embed-certs-118541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 02:31:53.277148  148269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:31:53.277213  148269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:31:53.292699  148269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36975
	I0804 02:31:53.293206  148269 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:31:53.293846  148269 main.go:141] libmachine: Using API Version  1
	I0804 02:31:53.293879  148269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:31:53.294267  148269 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:31:53.294472  148269 main.go:141] libmachine: (embed-certs-118541) Calling .DriverName
	I0804 02:31:53.294718  148269 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 02:31:53.295053  148269 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2
	I0804 02:31:53.295113  148269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 02:31:53.310396  148269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0804 02:31:53.310912  148269 main.go:141] libmachine: () Calling .GetVersion
	I0804 02:31:53.311512  148269 main.go:141] libmachine: Using API Version  1
	I0804 02:31:53.311535  148269 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 02:31:53.311864  148269 main.go:141] libmachine: () Calling .GetMachineName
	I0804 02:31:53.312160  148269 main.go:141] libmachine: (embed-certs-118541) Calling .DriverName
	I0804 02:31:53.346475  148269 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 02:31:53.347869  148269 start.go:297] selected driver: kvm2
	I0804 02:31:53.347887  148269 start.go:901] validating driver "kvm2" against &{Name:embed-certs-118541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:embed-certs-118541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26
280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:31:53.348085  148269 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 02:31:53.348818  148269 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:31:53.348890  148269 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 02:31:53.363978  148269 install.go:137] /home/jenkins/minikube-integration/19364-90243/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0804 02:31:53.364393  148269 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0804 02:31:53.364461  148269 cni.go:84] Creating CNI manager for ""
	I0804 02:31:53.364476  148269 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 02:31:53.364527  148269 start.go:340] cluster config:
	{Name:embed-certs-118541 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-118541 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 02:31:53.364665  148269 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 02:31:53.366473  148269 out.go:177] * Starting "embed-certs-118541" primary control-plane node in "embed-certs-118541" cluster
	I0804 02:31:57.537649  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:31:53.368067  148269 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 02:31:53.368115  148269 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 02:31:53.368125  148269 cache.go:56] Caching tarball of preloaded images
	I0804 02:31:53.368236  148269 preload.go:172] Found /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0804 02:31:53.368251  148269 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0804 02:31:53.368455  148269 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/embed-certs-118541/config.json ...
	I0804 02:31:53.368721  148269 start.go:360] acquireMachinesLock for embed-certs-118541: {Name:mkcf5b61bb8aa93bb0ddc2d3ec075d13cfaaae7f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0804 02:32:00.609658  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:06.689624  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:09.761689  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:15.841623  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:18.913649  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:24.993631  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:28.065644  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:34.145630  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:37.217630  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:43.297626  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:46.369678  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:52.449642  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:32:55.521597  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:01.601650  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:04.673648  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:10.753647  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:13.825659  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:19.905681  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:22.977642  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:29.057636  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:32.129657  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:38.209678  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:41.281624  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:47.361606  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:50.433684  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:56.513616  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	I0804 02:33:59.585571  147658 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.83.248:22: connect: no route to host
	
	
	==> CRI-O <==
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.415926612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738846415891054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f8f3bad-1d57-4418-bdfa-c19e06f9e4ca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.416648950Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83b7fc3f-9cd0-4187-a138-6244e7eb5f7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.416703989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83b7fc3f-9cd0-4187-a138-6244e7eb5f7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.416803215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738825114091039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 15,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738774110899995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9972c3d9ac48c109e18f8ecd3ab63258e1ef293ec7f038510e63ecc92300d575,PodSandboxId:2cc42dcfc0a1a12fc5430e2f80ee3ae68abe5c1480ed767740edb1cedc8af9c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722738710768394974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133333bf85e2dc79e4ec8bc934bf0eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83b7fc3f-9cd0-4187-a138-6244e7eb5f7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.455017115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a84b556-c54e-4e9f-b4b1-0a0cb127ddaa name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.455110989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a84b556-c54e-4e9f-b4b1-0a0cb127ddaa name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.456180624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1f0670b-1b7c-4cd6-9540-4b970900cb37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.456579977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738846456551667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1f0670b-1b7c-4cd6-9540-4b970900cb37 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.457103615Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc16d43c-f5ff-4b91-b06e-c9de812ccd0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.457176211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc16d43c-f5ff-4b91-b06e-c9de812ccd0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.457291723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738825114091039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 15,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738774110899995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9972c3d9ac48c109e18f8ecd3ab63258e1ef293ec7f038510e63ecc92300d575,PodSandboxId:2cc42dcfc0a1a12fc5430e2f80ee3ae68abe5c1480ed767740edb1cedc8af9c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722738710768394974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133333bf85e2dc79e4ec8bc934bf0eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc16d43c-f5ff-4b91-b06e-c9de812ccd0e name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.488422877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=821fce1d-c00a-4f8a-bd3d-5aad869bbc2a name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.488520947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=821fce1d-c00a-4f8a-bd3d-5aad869bbc2a name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.489642373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f81848ed-e6a2-493f-bbb6-2f3326db24ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.490179855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738846490140671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f81848ed-e6a2-493f-bbb6-2f3326db24ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.491342152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d870621b-1f40-4478-953e-da3a07581925 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.491394722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d870621b-1f40-4478-953e-da3a07581925 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.491507473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738825114091039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 15,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738774110899995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9972c3d9ac48c109e18f8ecd3ab63258e1ef293ec7f038510e63ecc92300d575,PodSandboxId:2cc42dcfc0a1a12fc5430e2f80ee3ae68abe5c1480ed767740edb1cedc8af9c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722738710768394974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133333bf85e2dc79e4ec8bc934bf0eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d870621b-1f40-4478-953e-da3a07581925 name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.523479137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a1d9c4b-f7f1-45b3-85b0-50906da38779 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.523552928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a1d9c4b-f7f1-45b3-85b0-50906da38779 name=/runtime.v1.RuntimeService/Version
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.524546830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=833ad8a0-6928-4d7e-af1b-e52dd54da57e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.525061011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722738846525031188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=833ad8a0-6928-4d7e-af1b-e52dd54da57e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.525489815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fe14d17-66db-4d5a-8e29-ed967fc4029a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.525591815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fe14d17-66db-4d5a-8e29-ed967fc4029a name=/runtime.v1.RuntimeService/ListContainers
	Aug 04 02:34:06 pause-141370 crio[2844]: time="2024-08-04 02:34:06.525711556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738825114091039,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 15,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270,PodSandboxId:8f32c50c51180e5daf29a1aa78c5b8287b6c7530f91ba86fc4a4e2fc92ac1b90,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:14,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722738774110899995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0be9a386b376751921a7ae38b76a67be,},Annotations:map[string]string{io.kubernetes.container.hash: 7d69dcbf,io.kubernetes.container.restartCount: 14,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9972c3d9ac48c109e18f8ecd3ab63258e1ef293ec7f038510e63ecc92300d575,PodSandboxId:2cc42dcfc0a1a12fc5430e2f80ee3ae68abe5c1480ed767740edb1cedc8af9c7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722738710768394974,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-141370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 133333bf85e2dc79e4ec8bc934bf0eb6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fe14d17-66db-4d5a-8e29-ed967fc4029a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	fc1a3bd000765       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Exited              kube-apiserver      15                  8f32c50c51180       kube-apiserver-pause-141370
	9972c3d9ac48c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   2 minutes ago       Running             kube-scheduler      4                   2cc42dcfc0a1a       kube-scheduler-pause-141370
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.197308] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.132590] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.293853] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.526967] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.068673] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.618591] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.533840] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.528766] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.074551] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.863211] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.172466] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 4 02:21] kauditd_printk_skb: 67 callbacks suppressed
	[ +32.239915] systemd-fstab-generator[2580]: Ignoring "noauto" option for root device
	[  +0.185913] systemd-fstab-generator[2601]: Ignoring "noauto" option for root device
	[  +0.264589] systemd-fstab-generator[2627]: Ignoring "noauto" option for root device
	[  +0.209323] systemd-fstab-generator[2664]: Ignoring "noauto" option for root device
	[  +0.352056] systemd-fstab-generator[2712]: Ignoring "noauto" option for root device
	[Aug 4 02:23] kauditd_printk_skb: 175 callbacks suppressed
	[  +0.009517] systemd-fstab-generator[2954]: Ignoring "noauto" option for root device
	[  +3.132857] systemd-fstab-generator[3500]: Ignoring "noauto" option for root device
	[ +19.640843] kauditd_printk_skb: 97 callbacks suppressed
	[Aug 4 02:27] systemd-fstab-generator[8846]: Ignoring "noauto" option for root device
	[Aug 4 02:28] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 4 02:31] systemd-fstab-generator[10413]: Ignoring "noauto" option for root device
	[Aug 4 02:32] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> kernel <==
	 02:34:06 up 14 min,  0 users,  load average: 0.02, 0.11, 0.10
	Linux pause-141370 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270": Process exited with status 1
	stdout:
	
	stderr:
	E0804 02:34:06.782788   10909 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270\": container with ID starting with e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270 not found: ID does not exist" containerID="e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270"
	time="2024-08-04T02:34:06Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270\": container with ID starting with e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270 not found: ID does not exist"
	
	
	==> kube-apiserver [fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0] <==
	I0804 02:33:45.283938       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0804 02:33:45.660379       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:45.660520       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0804 02:33:45.660597       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0804 02:33:45.663977       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0804 02:33:45.675572       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0804 02:33:45.675641       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0804 02:33:45.675928       1 instance.go:299] Using reconciler: lease
	W0804 02:33:45.678658       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:46.661647       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:46.661788       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:46.679551       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:48.125176       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:48.216147       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:48.430221       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:50.777401       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:51.149188       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:51.242721       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:54.476277       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:54.511744       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:54.703754       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:33:59.903369       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:34:00.377651       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0804 02:34:01.398810       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0804 02:34:05.677080       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-scheduler [9972c3d9ac48c109e18f8ecd3ab63258e1ef293ec7f038510e63ecc92300d575] <==
	Trace[446876181]: ---"Objects listed" error:Get "https://192.168.61.197:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (02:34:02.422)
	Trace[446876181]: [10.002075169s] [10.002075169s] END
	E0804 02:34:02.422430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.61.197:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0804 02:34:02.516371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0804 02:34:02.516430       1 trace.go:236] Trace[1160846334]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Aug-2024 02:33:52.514) (total time: 10001ms):
	Trace[1160846334]: ---"Objects listed" error:Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (02:34:02.516)
	Trace[1160846334]: [10.001703331s] [10.001703331s] END
	E0804 02:34:02.516443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0804 02:34:02.948748       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.61.197:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0804 02:34:02.948813       1 trace.go:236] Trace[1810840537]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Aug-2024 02:33:52.947) (total time: 10001ms):
	Trace[1810840537]: ---"Objects listed" error:Get "https://192.168.61.197:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (02:34:02.948)
	Trace[1810840537]: [10.001665034s] [10.001665034s] END
	E0804 02:34:02.948910       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.197:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	W0804 02:34:06.682954       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59926->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59926->192.168.61.197:8443: read: connection reset by peer
	W0804 02:34:06.683054       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.61.197:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59920->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683103       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.197:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59920->192.168.61.197:8443: read: connection reset by peer
	W0804 02:34:06.683219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.197:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59924->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.61.197:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59924->192.168.61.197:8443: read: connection reset by peer
	W0804 02:34:06.683657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.197:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59942->192.168.61.197:8443: read: connection reset by peer
	W0804 02:34:06.683663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59932->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.197:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59942->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59932->192.168.61.197:8443: read: connection reset by peer
	W0804 02:34:06.683774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59952->192.168.61.197:8443: read: connection reset by peer
	E0804 02:34:06.683801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.197:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.197:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.61.197:59952->192.168.61.197:8443: read: connection reset by peer
	
	
	==> kubelet <==
	Aug 04 02:33:48 pause-141370 kubelet[10420]: E0804 02:33:48.112467   10420 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1\" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="7410b06de109f7cb462ee831af128f99d2e951e1bd10d126d686d73225dedbe2"
	Aug 04 02:33:48 pause-141370 kubelet[10420]: E0804 02:33:48.112611   10420 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.61.197:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.61.197:2380 --initial-cluster=pause-141370=https://192.168.61.197:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.61.197:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.61.197:2380 --name=pause-141370 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=7000
0 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Li
fecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-pause-141370_kube-system(db6c248ebf6295778949f18512a13e06): CreateContainerError: the container name "k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use
	Aug 04 02:33:48 pause-141370 kubelet[10420]: E0804 02:33:48.112684   10420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1\\\" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-pause-141370" podUID="db6c248ebf6295778949f18512a13e06"
	Aug 04 02:33:50 pause-141370 kubelet[10420]: E0804 02:33:50.120736   10420 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 04 02:33:50 pause-141370 kubelet[10420]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 04 02:33:50 pause-141370 kubelet[10420]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 04 02:33:50 pause-141370 kubelet[10420]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 04 02:33:50 pause-141370 kubelet[10420]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 04 02:33:50 pause-141370 kubelet[10420]: E0804 02:33:50.152808   10420 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-141370\" not found"
	Aug 04 02:33:55 pause-141370 kubelet[10420]: E0804 02:33:55.958198   10420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": net/http: TLS handshake timeout" node="pause-141370"
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.069800   10420 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://control-plane.minikube.internal:8443/apis/certificates.k8s.io/v1/certificatesigningrequests": net/http: TLS handshake timeout
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.112757   10420 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1\" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="7410b06de109f7cb462ee831af128f99d2e951e1bd10d126d686d73225dedbe2"
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.113184   10420 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.61.197:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.61.197:2380 --initial-cluster=pause-141370=https://192.168.61.197:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.61.197:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.61.197:2380 --name=pause-141370 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=7000
0 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Li
fecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-pause-141370_kube-system(db6c248ebf6295778949f18512a13e06): CreateContainerError: the container name "k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.113297   10420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1\\\" is already in use by aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-pause-141370" podUID="db6c248ebf6295778949f18512a13e06"
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.153895   10420 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"pause-141370\" not found"
	Aug 04 02:34:00 pause-141370 kubelet[10420]: E0804 02:34:00.672009   10420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-141370?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
	Aug 04 02:34:01 pause-141370 kubelet[10420]: E0804 02:34:01.871540   10420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{pause-141370.17e865bab6d6d360  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-141370,UID:pause-141370,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-141370,},FirstTimestamp:2024-08-04 02:31:50.07506928 +0000 UTC m=+0.441924761,LastTimestamp:2024-08-04 02:31:50.07506928 +0000 UTC m=+0.441924761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-141370,}"
	Aug 04 02:34:02 pause-141370 kubelet[10420]: E0804 02:34:02.114936   10420 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-pause-141370_kube-system_3c3611d4360ca5575442be4169424b77_1\" is already in use by deaf9590fc146616e6a5d3f8f4111020af838d2609631cdda5099fac3746ae71. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="5201ba12736beb6d0d90a5c5480f4f5c8ce5f213f718eb816155574082e9f40b"
	Aug 04 02:34:02 pause-141370 kubelet[10420]: E0804 02:34:02.115158   10420 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.3,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credenti
als=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-pause-141370_kube-system(3c3611d4360c
a5575442be4169424b77): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-pause-141370_kube-system_3c3611d4360ca5575442be4169424b77_1" is already in use by deaf9590fc146616e6a5d3f8f4111020af838d2609631cdda5099fac3746ae71. You have to remove that container to be able to reuse that name: that name is already in use
	Aug 04 02:34:02 pause-141370 kubelet[10420]: E0804 02:34:02.115254   10420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-pause-141370_kube-system_3c3611d4360ca5575442be4169424b77_1\\\" is already in use by deaf9590fc146616e6a5d3f8f4111020af838d2609631cdda5099fac3746ae71. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-pause-141370" podUID="3c3611d4360ca5575442be4169424b77"
	Aug 04 02:34:02 pause-141370 kubelet[10420]: I0804 02:34:02.962400   10420 kubelet_node_status.go:73] "Attempting to register node" node="pause-141370"
	Aug 04 02:34:05 pause-141370 kubelet[10420]: E0804 02:34:05.682462   10420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": read tcp 192.168.61.197:59950->192.168.61.197:8443: read: connection reset by peer" node="pause-141370"
	Aug 04 02:34:06 pause-141370 kubelet[10420]: I0804 02:34:06.551159   10420 scope.go:117] "RemoveContainer" containerID="e70afea3fa13464ff7fb900a8f3d84fc1e824308cdb83bcdd8a535713bf5a270"
	Aug 04 02:34:06 pause-141370 kubelet[10420]: I0804 02:34:06.553170   10420 scope.go:117] "RemoveContainer" containerID="fc1a3bd000765f8e761e95d7e4ff33bc0063af22c1f3b84acb6bc0e45f4443f0"
	Aug 04 02:34:06 pause-141370 kubelet[10420]: E0804 02:34:06.553599   10420 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-pause-141370_kube-system(0be9a386b376751921a7ae38b76a67be)\"" pod="kube-system/kube-apiserver-pause-141370" podUID="0be9a386b376751921a7ae38b76a67be"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-141370 -n pause-141370
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-141370 -n pause-141370: exit status 2 (225.072048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-141370" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (765.39s)
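
Note on the failure above: the kubelet log shows the same CreateContainerError repeatedly - CRI-O still holds an exited container registered under the name "k8s_etcd_etcd-pause-141370_kube-system_db6c248ebf6295778949f18512a13e06_1", so each new start attempt cannot reuse that name. A minimal manual-cleanup sketch, assuming shell access to the pause-141370 node; the container ID is the one quoted in the kubelet log, and these commands are not part of the test run:

    minikube ssh -p pause-141370 "sudo crictl ps -a --name etcd"
    minikube ssh -p pause-141370 "sudo crictl rm aff0355206aa07a3e271cf4a5abd4d830a988a820d184ff6ac1cc27cafa36cb9"

Removing the stale container record frees the name, so the kubelet should be able to recreate etcd (and, by the same route, kube-controller-manager) on its next sync instead of looping on the error.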

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7200.058s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0804 02:41:42.265396   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (20m4s)
	TestStartStop (23m10s)
	TestStartStop/group/default-k8s-diff-port (7m53s)
	TestStartStop/group/default-k8s-diff-port/serial (7m53s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (1m38s)
	TestStartStop/group/embed-certs (14m30s)
	TestStartStop/group/embed-certs/serial (14m30s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (1m19s)
	TestStartStop/group/no-preload (15m55s)
	TestStartStop/group/no-preload/serial (15m55s)
	TestStartStop/group/no-preload/serial/SecondStart (11m23s)
	TestStartStop/group/old-k8s-version (16m16s)
	TestStartStop/group/old-k8s-version/serial (16m16s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (10m19s)
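
The panic above is the Go testing package's global deadline firing: the suite was started with a 2h test timeout, and when it elapsed the alarm goroutine (testing.(*M).startAlarm, the first goroutine in the dump below) aborted the binary and printed every live goroutine. A minimal sketch of how such a run is bounded, assuming the integration suite is invoked directly with go test rather than through the project's CI wrapper:

    go test ./test/integration -run 'TestStartStop|TestNetworkPlugins' -timeout 2h

Raising the value, or passing -timeout 0 to disable the limit entirely, would only hide the hang; the stacks below show which goroutines were still waiting when the alarm fired.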

                                                
                                                
goroutine 2640 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005daea0, 0xc00075dbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006b8420, {0x49d6100, 0x2b, 0x2b}, {0x26b7039?, 0xc000863b00?, 0x4a92a40?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000820be0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000820be0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000560f00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2300 [chan receive, 11 minutes]:
testing.(*T).Run(0xc001782000, {0x2669a7e?, 0x60400000004?}, 0xc000c64280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001782000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001782000, 0xc0014f6000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1937
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2346 [chan receive, 11 minutes]:
testing.(*T).Run(0xc0021804e0, {0x2669a7e?, 0x60400000004?}, 0xc000c64100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0021804e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0021804e0, 0xc00050c180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1956
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1687 [chan receive, 21 minutes]:
testing.(*T).Run(0xc00029ed00, {0x265c689?, 0x55127c?}, 0xc001e3e138)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00029ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00029ed00, 0x313f960)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 244 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7fcb2914d538, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000c64000)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000c64000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006864a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0006864a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004fc0f0, {0x36b2180, 0xc0006864a0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0004fc0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x592e44?, 0xc000cfe1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 193
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 37 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 36
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2501 [syscall, 11 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x242b9, 0xc0000acab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0018e4ba0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0018e4ba0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c5ac00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000c5ac00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001783a00, 0xc000c5ac00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bf160, 0xc000476070}, 0xc001783a00, {0xc0018a4018, 0x16}, {0x0?, 0xc000ce7f60?}, {0x551133?, 0x4a170f?}, {0xc0001fe180, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001783a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001783a00, 0xc000c64280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2581 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bf160, 0xc0004b8850}, {0x36b2840, 0xc001df5e00}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bf160?, 0xc0004a8000?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bf160, 0xc0004a8000}, 0xc0021fcb60, {0xc000c840d8, 0x12}, {0x26826d2, 0x14}, {0x269a292, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36bf160, 0xc0004a8000}, 0xc0021fcb60, {0xc000c840d8, 0x12}, {0x2669a68?, 0xc001762760?}, {0x551133?, 0x4a170f?}, {0xc00086c600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0021fcb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0021fcb60, 0xc0014f6080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2134 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00029fd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00029fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00029fd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00029fd40, 0xc0021ee900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1936 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005dad00, 0x313fb80)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1753
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1937 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0005db040, {0x265dc34?, 0x0?}, 0xc0014f6000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005db040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0005db040, 0xc000217b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1936
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2351 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0002175d0, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001790c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000217640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00474e010, {0x369b2a0, 0xc000914540}, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00474e010, 0x3b9aca00, 0x0, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2573 [select]:
os/exec.(*Cmd).watchCtx(0xc000bc2300, 0xc000cd24e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2570
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1753 [chan receive, 23 minutes]:
testing.(*T).Run(0xc00029fba0, {0x265c689?, 0x551133?}, 0x313fb80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00029fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00029fba0, 0x313f9a8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2572 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d158, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001af4660?, 0xc000bb6340?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001af4660, {0xc000bb6340, 0x3cc0, 0x3cc0})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a228, {0xc000bb6340?, 0xc000472690?, 0x3e4a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177a690, {0x3699d40, 0xc0008b4050})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177a690}, {0x3699d40, 0xc0008b4050}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a228?, {0x3699e80, 0xc00177a690})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a228, {0x3699e80, 0xc00177a690})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177a690}, {0x3699da0, 0xc00078a228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00050d400?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2570
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2560 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d630, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014f7200?, 0xc000b80000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014f7200, {0xc000b80000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0014f7200, {0xc000b80000?, 0x7fcb28724768?, 0xc001e3f320?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0008b4078, {0xc000b80000?, 0xc000b2a938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001e3f320, {0xc000b80000?, 0x0?, 0xc001e3f320?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0017a2d30, {0x369ba40, 0xc001e3f320})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0017a2a88, {0x369ae20, 0xc0008b4078}, 0xc000b2a980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0017a2a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0017a2a88, {0xc000b8b000, 0x1000, 0xc0015cb6c0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00144bbc0, {0xc00179c4a0, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699f20, 0xc00144bbc0}, {0xc00179c4a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00179c4a0, 0x9, 0xb2adc0?}, {0x3699f20?, 0xc00144bbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00179c460)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000b2afa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000c5a300)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2559
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2476 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021ec800, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2502 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d348, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00144b380?, 0xc001767b71?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00144b380, {0xc001767b71, 0x48f, 0x48f})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a400, {0xc001767b71?, 0x21a4760?, 0x208?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177abd0, {0x3699d40, 0xc0008b4338})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177abd0}, {0x3699d40, 0xc0008b4338}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a400?, {0x3699e80, 0xc00177abd0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a400, {0x3699e80, 0xc00177abd0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177abd0}, {0x3699da0, 0xc00078a400}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000c64280?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2501
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2503 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d060, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00144b440?, 0xc001d46c70?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00144b440, {0xc001d46c70, 0x15390, 0x15390})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a418, {0xc001d46c70?, 0xc000ce3530?, 0x3ff0c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177ac00, {0x3699d40, 0xc000758d40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177ac00}, {0x3699d40, 0xc000758d40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a418?, {0x3699e80, 0xc00177ac00})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a418, {0x3699e80, 0xc00177ac00})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177ac00}, {0x3699da0, 0xc00078a418}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001b0d2c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2501
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 542 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001cbc300, 0xc001b92600)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 541
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2063 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005da9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005da9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005da9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0005da9c0, 0xc0014f6380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 435 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0021ec250, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021c1260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021ec280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00474d7f0, {0x369b2a0, 0xc004735290}, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00474d7f0, 0x3b9aca00, 0x0, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 673 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b28c00, 0xc001b0cb40)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 320
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 754 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00149d980, 0xc001562000)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 721
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 437 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 436
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1958 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0005db860, {0x265dc34?, 0x0?}, 0xc0001c4a00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005db860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0005db860, 0xc0019c0580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1936
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1956 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0005db520, {0x265dc34?, 0x0?}, 0xc00050c180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005db520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0005db520, 0xc000217e80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1936
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2514 [chan receive]:
testing.(*T).Run(0xc0017824e0, {0x2669a7e?, 0x60400000004?}, 0xc0001c4980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017824e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0017824e0, 0xc000c64200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1954 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005db1e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005db1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005db1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0005db1e0, 0xc000217b40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1936
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2440 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d440, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00144a780?, 0xc000cf2b23?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00144a780, {0xc000cf2b23, 0x4dd, 0x4dd})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a298, {0xc000cf2b23?, 0x21a4760?, 0x229?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177a540, {0x3699d40, 0xc0008b41a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177a540}, {0x3699d40, 0xc0008b41a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a298?, {0x3699e80, 0xc00177a540})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a298, {0x3699e80, 0xc00177a540})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177a540}, {0x3699da0, 0xc00078a298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000c64100?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2439
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 455 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021c1380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 345
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 436 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf320, 0xc0006a2060}, 0xc00212d750, 0xc000b84f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf320, 0xc0006a2060}, 0x40?, 0xc00212d750, 0xc00212d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf320?, 0xc0006a2060?}, 0xc0005da9c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000b7cf00?, 0xc001562b40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 825 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc0016bf0e0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 822
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 456 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0021ec280, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 345
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2253 [chan receive]:
testing.(*T).Run(0xc0021fc9c0, {0x268844e?, 0x60400000004?}, 0xc0014f6080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0021fc9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0021fc9c0, 0xc0001c4a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1958
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1955 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0005db380, {0x265dc34?, 0x0?}, 0xc000c64200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005db380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0005db380, 0xc000217e40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1936
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 824 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc0016bf0e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 822
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 2036 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc002180340, 0xc001e3e138)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1687
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2441 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914cb88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00144a840?, 0xc001a098f6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00144a840, {0xc001a098f6, 0x1870a, 0x1870a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a2c8, {0xc001a098f6?, 0xc001763d30?, 0x1fe61?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177a5a0, {0x3699d40, 0xc000758cc8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177a5a0}, {0x3699d40, 0xc000758cc8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a2c8?, {0x3699e80, 0xc00177a5a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a2c8, {0x3699e80, 0xc00177a5a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177a5a0}, {0x3699da0, 0xc00078a2c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001b0c480?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2439
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2064 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005dbd40)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0005dbd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0005dbd40, 0xc0014f6400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2442 [select, 11 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c5a480, 0xc001b0c600)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2439
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2037 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002180820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002180820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002180820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002180820, 0xc0001c4800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2353 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2352
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2475 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016db200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2534 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0021ec7d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016db0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0021ec800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000774360, {0x369b2a0, 0xc0008e0060}, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000774360, 0x3b9aca00, 0x0, 0x1, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2476
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2535 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf320, 0xc0006a2060}, 0xc00212b750, 0xc00212b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf320, 0xc0006a2060}, 0x16?, 0xc00212b750, 0xc00212b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf320?, 0xc0006a2060?}, 0xc001783860?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00212b7d0?, 0x592e44?, 0xc000c64200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2476
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2355 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000217640, 0xc0006a2060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2318
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2570 [syscall]:
syscall.Syscall6(0xf7, 0x1, 0x24e2e, 0xc000b26ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0018504e0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0018504e0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000bc2300)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000bc2300)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001782680, 0xc000bc2300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bf160, 0xc0003c4070}, 0xc001782680, {0xc000058980, 0x1c}, {0x0?, 0xc000094760?}, {0x551133?, 0x4a170f?}, {0xc00086c700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001782680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001782680, 0xc0001c4980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2514
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2065 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021fc000)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021fc000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021fc000, 0xc0014f6480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2098 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0021fc1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0021fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0021fc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0021fc1a0, 0xc0014f6500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2133 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc00051f680)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00029f6c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00029f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00029f6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00029f6c0, 0xc0021ee880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2036
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2439 [syscall, 11 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x240ca, 0xc0008e7ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0018e4690)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0018e4690)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c5a480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000c5a480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001782820, 0xc000c5a480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36bf160, 0xc000472690}, 0xc001782820, {0xc0021002d0, 0x11}, {0x0?, 0xc00212b760?}, {0x551133?, 0x4a170f?}, {0xc0001cdc00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001782820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001782820, 0xc000c64100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2346
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2354 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001790d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2318
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2571 [IO wait]:
internal/poll.runtime_pollWait(0x7fcb2914d820, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001af45a0?, 0xc00048ba55?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001af45a0, {0xc00048ba55, 0x5ab, 0x5ab})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00078a210, {0xc00048ba55?, 0x21a4760?, 0x213?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00177a660, {0x3699d40, 0xc000758c88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3699e80, 0xc00177a660}, {0x3699d40, 0xc000758c88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00078a210?, {0x3699e80, 0xc00177a660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00078a210, {0x3699e80, 0xc00177a660})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3699e80, 0xc00177a660}, {0x3699da0, 0xc00078a210}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0001c4980?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2570
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2352 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf320, 0xc0006a2060}, 0xc00175ef50, 0xc0000abf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf320, 0xc0006a2060}, 0xa0?, 0xc00175ef50, 0xc00175ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf320?, 0xc0006a2060?}, 0xc0021fc1a0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000964180?, 0xc001774ba0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2504 [select, 11 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c5ac00, 0xc001b0d3e0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2501
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2536 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2535
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                    

Test pass (169/215)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 29.94
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 13.66
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 20.53
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.57
31 TestOffline 66.36
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 47.11
38 TestCertExpiration 278.26
40 TestForceSystemdFlag 45.43
41 TestForceSystemdEnv 45.72
43 TestKVMDriverInstallOrUpdate 4.17
47 TestErrorSpam/setup 43.88
48 TestErrorSpam/start 0.35
49 TestErrorSpam/status 0.75
50 TestErrorSpam/pause 1.62
51 TestErrorSpam/unpause 1.6
52 TestErrorSpam/stop 5.24
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 60.07
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 49.27
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 3
64 TestFunctional/serial/CacheCmd/cache/add_local 2.23
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 34.56
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.52
75 TestFunctional/serial/LogsFileCmd 1.51
76 TestFunctional/serial/InvalidService 4.5
78 TestFunctional/parallel/ConfigCmd 0.32
79 TestFunctional/parallel/DashboardCmd 28.81
80 TestFunctional/parallel/DryRun 0.26
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 1.03
86 TestFunctional/parallel/ServiceCmdConnect 11.54
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 50.52
90 TestFunctional/parallel/SSHCmd 0.43
91 TestFunctional/parallel/CpCmd 1.22
92 TestFunctional/parallel/MySQL 27.17
93 TestFunctional/parallel/FileSync 0.21
94 TestFunctional/parallel/CertSync 1.28
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
102 TestFunctional/parallel/License 0.63
103 TestFunctional/parallel/Version/short 0.05
104 TestFunctional/parallel/Version/components 0.79
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
109 TestFunctional/parallel/ImageCommands/ImageBuild 6.96
110 TestFunctional/parallel/ImageCommands/Setup 1.93
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
124 TestFunctional/parallel/ProfileCmd/profile_list 0.28
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
126 TestFunctional/parallel/ServiceCmd/DeployApp 11.16
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.54
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.86
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.78
134 TestFunctional/parallel/MountCmd/any-port 10.95
135 TestFunctional/parallel/ServiceCmd/List 0.34
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
138 TestFunctional/parallel/ServiceCmd/Format 0.39
139 TestFunctional/parallel/ServiceCmd/URL 0.38
140 TestFunctional/parallel/MountCmd/specific-port 1.69
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.87
142 TestFunctional/delete_echo-server_images 0.04
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/StartCluster 212.56
149 TestMultiControlPlane/serial/DeployApp 6.72
150 TestMultiControlPlane/serial/PingHostFromPods 1.23
151 TestMultiControlPlane/serial/AddWorkerNode 57.53
152 TestMultiControlPlane/serial/NodeLabels 0.07
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
154 TestMultiControlPlane/serial/CopyFile 13.07
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
160 TestMultiControlPlane/serial/DeleteSecondaryNode 17.41
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
163 TestMultiControlPlane/serial/RestartCluster 355.69
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
165 TestMultiControlPlane/serial/AddSecondaryNode 80.94
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
170 TestJSONOutput/start/Command 98.05
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.72
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.63
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 7.37
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.19
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 88.97
202 TestMountStart/serial/StartWithMountFirst 27.53
203 TestMountStart/serial/VerifyMountFirst 0.36
204 TestMountStart/serial/StartWithMountSecond 30.35
205 TestMountStart/serial/VerifyMountSecond 0.38
206 TestMountStart/serial/DeleteFirst 0.71
207 TestMountStart/serial/VerifyMountPostDelete 0.38
208 TestMountStart/serial/Stop 1.28
209 TestMountStart/serial/RestartStopped 23.15
210 TestMountStart/serial/VerifyMountPostStop 0.38
213 TestMultiNode/serial/FreshStart2Nodes 125.54
214 TestMultiNode/serial/DeployApp2Nodes 5.43
215 TestMultiNode/serial/PingHostFrom2Pods 0.81
216 TestMultiNode/serial/AddNode 54.25
217 TestMultiNode/serial/MultiNodeLabels 0.07
218 TestMultiNode/serial/ProfileList 0.21
219 TestMultiNode/serial/CopyFile 7.15
220 TestMultiNode/serial/StopNode 2.31
221 TestMultiNode/serial/StartAfterStop 40.79
223 TestMultiNode/serial/DeleteNode 2.22
225 TestMultiNode/serial/RestartMultiNode 181.77
226 TestMultiNode/serial/ValidateNameConflict 44.53
233 TestScheduledStopUnix 113.82
237 TestRunningBinaryUpgrade 190.12
249 TestPause/serial/Start 150.92
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
252 TestNoKubernetes/serial/StartWithK8s 82.48
253 TestNoKubernetes/serial/StartWithStopK8s 5.84
254 TestNoKubernetes/serial/Start 25.43
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
257 TestNoKubernetes/serial/ProfileList 23.19
269 TestNoKubernetes/serial/Stop 1.3
270 TestNoKubernetes/serial/StartNoArgs 39.85
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
272 TestStoppedBinaryUpgrade/Setup 2.62
273 TestStoppedBinaryUpgrade/Upgrade 97.77
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
TestDownloadOnly/v1.20.0/json-events (29.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-290956 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-290956 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (29.937122193s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (29.94s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-290956
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-290956: exit status 85 (60.26547ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-290956 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |          |
	|         | -p download-only-290956        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:42:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:42:01.262300   97419 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:42:01.262570   97419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:01.262582   97419 out.go:304] Setting ErrFile to fd 2...
	I0804 00:42:01.262587   97419 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:01.262799   97419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	W0804 00:42:01.262933   97419 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19364-90243/.minikube/config/config.json: open /home/jenkins/minikube-integration/19364-90243/.minikube/config/config.json: no such file or directory
	I0804 00:42:01.263545   97419 out.go:298] Setting JSON to true
	I0804 00:42:01.264397   97419 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8665,"bootTime":1722723456,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:42:01.264463   97419 start.go:139] virtualization: kvm guest
	I0804 00:42:01.266889   97419 out.go:97] [download-only-290956] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0804 00:42:01.267027   97419 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball: no such file or directory
	I0804 00:42:01.267062   97419 notify.go:220] Checking for updates...
	I0804 00:42:01.268482   97419 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:42:01.270019   97419 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:42:01.271392   97419 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 00:42:01.273010   97419 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:42:01.274731   97419 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 00:42:01.277387   97419 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 00:42:01.277614   97419 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:42:01.313586   97419 out.go:97] Using the kvm2 driver based on user configuration
	I0804 00:42:01.313624   97419 start.go:297] selected driver: kvm2
	I0804 00:42:01.313632   97419 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:42:01.314087   97419 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:01.314238   97419 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:42:01.330196   97419 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:42:01.330247   97419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:42:01.330728   97419 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0804 00:42:01.330881   97419 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:42:01.330909   97419 cni.go:84] Creating CNI manager for ""
	I0804 00:42:01.330917   97419 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:42:01.330927   97419 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:42:01.330975   97419 start.go:340] cluster config:
	{Name:download-only-290956 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-290956 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:42:01.331144   97419 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:01.332925   97419 out.go:97] Downloading VM boot image ...
	I0804 00:42:01.332968   97419 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0804 00:42:11.950941   97419 out.go:97] Starting "download-only-290956" primary control-plane node in "download-only-290956" cluster
	I0804 00:42:11.950964   97419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:42:12.065708   97419 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:42:12.065746   97419 cache.go:56] Caching tarball of preloaded images
	I0804 00:42:12.065914   97419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:42:12.067953   97419 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0804 00:42:12.067982   97419 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:12.184021   97419 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:42:24.939211   97419 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:24.939308   97419 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:25.828549   97419 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0804 00:42:25.828899   97419 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/download-only-290956/config.json ...
	I0804 00:42:25.828929   97419 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/download-only-290956/config.json: {Name:mkb9f9f5d097fb7e5f888b2e9d8d896bc26392ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:42:25.829087   97419 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0804 00:42:25.829249   97419 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-290956 host does not exist
	  To start a cluster, run: "minikube start -p download-only-290956"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-290956
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (13.66s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-673448 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-673448 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.660928964s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.66s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-673448
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-673448: exit status 85 (57.75071ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-290956 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |                     |
	|         | -p download-only-290956        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| delete  | -p download-only-290956        | download-only-290956 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| start   | -o=json --download-only        | download-only-673448 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |                     |
	|         | -p download-only-673448        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:42:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:42:31.512819   97692 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:42:31.512936   97692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:31.512947   97692 out.go:304] Setting ErrFile to fd 2...
	I0804 00:42:31.512952   97692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:31.513172   97692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 00:42:31.513799   97692 out.go:298] Setting JSON to true
	I0804 00:42:31.514675   97692 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8695,"bootTime":1722723456,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:42:31.514736   97692 start.go:139] virtualization: kvm guest
	I0804 00:42:31.517097   97692 out.go:97] [download-only-673448] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:42:31.517251   97692 notify.go:220] Checking for updates...
	I0804 00:42:31.518567   97692 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:42:31.520145   97692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:42:31.521571   97692 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 00:42:31.522890   97692 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:42:31.524368   97692 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 00:42:31.526809   97692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 00:42:31.527035   97692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:42:31.558160   97692 out.go:97] Using the kvm2 driver based on user configuration
	I0804 00:42:31.558187   97692 start.go:297] selected driver: kvm2
	I0804 00:42:31.558193   97692 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:42:31.558537   97692 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:31.558624   97692 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:42:31.573295   97692 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:42:31.573378   97692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:42:31.573884   97692 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0804 00:42:31.574077   97692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:42:31.574151   97692 cni.go:84] Creating CNI manager for ""
	I0804 00:42:31.574170   97692 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:42:31.574181   97692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:42:31.574256   97692 start.go:340] cluster config:
	{Name:download-only-673448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-673448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:42:31.574382   97692 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:31.576126   97692 out.go:97] Starting "download-only-673448" primary control-plane node in "download-only-673448" cluster
	I0804 00:42:31.576161   97692 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:42:32.173674   97692 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0804 00:42:32.173711   97692 cache.go:56] Caching tarball of preloaded images
	I0804 00:42:32.173897   97692 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0804 00:42:32.175761   97692 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0804 00:42:32.175780   97692 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:32.298911   97692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-673448 host does not exist
	  To start a cluster, run: "minikube start -p download-only-673448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-673448
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (20.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-909419 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-909419 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.531595401s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (20.53s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-909419
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-909419: exit status 85 (62.826445ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-290956 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |                     |
	|         | -p download-only-290956           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| delete  | -p download-only-290956           | download-only-290956 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| start   | -o=json --download-only           | download-only-673448 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |                     |
	|         | -p download-only-673448           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| delete  | -p download-only-673448           | download-only-673448 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC | 04 Aug 24 00:42 UTC |
	| start   | -o=json --download-only           | download-only-909419 | jenkins | v1.33.1 | 04 Aug 24 00:42 UTC |                     |
	|         | -p download-only-909419           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/04 00:42:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0804 00:42:45.487092   97915 out.go:291] Setting OutFile to fd 1 ...
	I0804 00:42:45.487220   97915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:45.487232   97915 out.go:304] Setting ErrFile to fd 2...
	I0804 00:42:45.487238   97915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 00:42:45.487399   97915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 00:42:45.487983   97915 out.go:298] Setting JSON to true
	I0804 00:42:45.488827   97915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8709,"bootTime":1722723456,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 00:42:45.488899   97915 start.go:139] virtualization: kvm guest
	I0804 00:42:45.491084   97915 out.go:97] [download-only-909419] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 00:42:45.491258   97915 notify.go:220] Checking for updates...
	I0804 00:42:45.492582   97915 out.go:169] MINIKUBE_LOCATION=19364
	I0804 00:42:45.494009   97915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 00:42:45.495257   97915 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 00:42:45.496487   97915 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 00:42:45.497909   97915 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0804 00:42:45.500287   97915 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0804 00:42:45.500524   97915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 00:42:45.533113   97915 out.go:97] Using the kvm2 driver based on user configuration
	I0804 00:42:45.533148   97915 start.go:297] selected driver: kvm2
	I0804 00:42:45.533155   97915 start.go:901] validating driver "kvm2" against <nil>
	I0804 00:42:45.533525   97915 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:45.533625   97915 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19364-90243/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0804 00:42:45.548420   97915 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0804 00:42:45.548472   97915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0804 00:42:45.548934   97915 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0804 00:42:45.549087   97915 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0804 00:42:45.549116   97915 cni.go:84] Creating CNI manager for ""
	I0804 00:42:45.549124   97915 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0804 00:42:45.549141   97915 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0804 00:42:45.549207   97915 start.go:340] cluster config:
	{Name:download-only-909419 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-909419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 00:42:45.549305   97915 iso.go:125] acquiring lock: {Name:mkcc17a9dfa096dce19f9b8d6a021ed3865a200f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0804 00:42:45.551015   97915 out.go:97] Starting "download-only-909419" primary control-plane node in "download-only-909419" cluster
	I0804 00:42:45.551045   97915 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:42:46.154782   97915 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:42:46.154821   97915 cache.go:56] Caching tarball of preloaded images
	I0804 00:42:46.154996   97915 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:42:46.157452   97915 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0804 00:42:46.157474   97915 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:46.280555   97915 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0804 00:42:57.456684   97915 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:57.456786   97915 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-90243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0804 00:42:58.184435   97915 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0804 00:42:58.184830   97915 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/download-only-909419/config.json ...
	I0804 00:42:58.184866   97915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/download-only-909419/config.json: {Name:mk166444a79d13a5089786e894dc93784fa33ae5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0804 00:42:58.185026   97915 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0804 00:42:58.185161   97915 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-90243/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-909419 host does not exist
	  To start a cluster, run: "minikube start -p download-only-909419"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-909419
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-475575 --alsologtostderr --binary-mirror http://127.0.0.1:46051 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-475575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-475575
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (66.36s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-135046 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-135046 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.348000459s)
helpers_test.go:175: Cleaning up "offline-crio-135046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-135046
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-135046: (1.010547331s)
--- PASS: TestOffline (66.36s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-474272
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-474272: exit status 85 (52.62374ms)

                                                
                                                
-- stdout --
	* Profile "addons-474272" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-474272"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-474272
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-474272: exit status 85 (52.063416ms)

                                                
                                                
-- stdout --
	* Profile "addons-474272" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-474272"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (47.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-933588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-933588 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.817613461s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-933588 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-933588 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-933588 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-933588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-933588
--- PASS: TestCertOptions (47.11s)

                                                
                                    
TestCertExpiration (278.26s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362636 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362636 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m6.767165178s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362636 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362636 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.437995465s)
helpers_test.go:175: Cleaning up "cert-expiration-362636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-362636
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-362636: (1.050774604s)
--- PASS: TestCertExpiration (278.26s)

                                                
                                    
TestForceSystemdFlag (45.43s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-156304 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-156304 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.398140247s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-156304 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-156304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-156304
--- PASS: TestForceSystemdFlag (45.43s)

                                                
                                    
TestForceSystemdEnv (45.72s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-974508 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-974508 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.915848913s)
helpers_test.go:175: Cleaning up "force-systemd-env-974508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-974508
--- PASS: TestForceSystemdEnv (45.72s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.17s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.17s)

                                                
                                    
TestErrorSpam/setup (43.88s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-690690 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-690690 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-690690 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-690690 --driver=kvm2  --container-runtime=crio: (43.8808951s)
--- PASS: TestErrorSpam/setup (43.88s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
TestErrorSpam/stop (5.24s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop: (2.296301285s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop: (1.467232157s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-690690 --log_dir /tmp/nospam-690690 stop: (1.477811212s)
--- PASS: TestErrorSpam/stop (5.24s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19364-90243/.minikube/files/etc/test/nested/copy/97407/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (60.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-410514 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m0.066551344s)
--- PASS: TestFunctional/serial/StartWithProxy (60.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (49.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-410514 --alsologtostderr -v=8: (49.264563466s)
functional_test.go:659: soft start took 49.265194127s for "functional-410514" cluster.
--- PASS: TestFunctional/serial/SoftStart (49.27s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-410514 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 cache add registry.k8s.io/pause:3.3: (1.061562259s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 cache add registry.k8s.io/pause:latest: (1.002986181s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-410514 /tmp/TestFunctionalserialCacheCmdcacheadd_local137300865/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache add minikube-local-cache-test:functional-410514
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 cache add minikube-local-cache-test:functional-410514: (1.894727873s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache delete minikube-local-cache-test:functional-410514
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-410514
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.369141ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 kubectl -- --context functional-410514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-410514 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.56s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-410514 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.560247888s)
functional_test.go:757: restart took 34.560387099s for "functional-410514" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.56s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-410514 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 logs: (1.51772783s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 logs --file /tmp/TestFunctionalserialLogsFileCmd2652829263/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 logs --file /tmp/TestFunctionalserialLogsFileCmd2652829263/001/logs.txt: (1.507490449s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.5s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-410514 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-410514
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-410514: exit status 115 (269.066802ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.195:30696 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-410514 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-410514 delete -f testdata/invalidsvc.yaml: (1.032143947s)
--- PASS: TestFunctional/serial/InvalidService (4.50s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 config get cpus: exit status 14 (53.801186ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 config get cpus: exit status 14 (50.345368ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (28.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410514 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-410514 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 111426: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.81s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410514 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (130.941863ms)

                                                
                                                
-- stdout --
	* [functional-410514] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:26:57.551091  111316 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:26:57.551345  111316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:26:57.551354  111316 out.go:304] Setting ErrFile to fd 2...
	I0804 01:26:57.551358  111316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:26:57.551550  111316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:26:57.552097  111316 out.go:298] Setting JSON to false
	I0804 01:26:57.553102  111316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11362,"bootTime":1722723456,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:26:57.553165  111316 start.go:139] virtualization: kvm guest
	I0804 01:26:57.555460  111316 out.go:177] * [functional-410514] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0804 01:26:57.557032  111316 notify.go:220] Checking for updates...
	I0804 01:26:57.557052  111316 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:26:57.558443  111316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:26:57.559765  111316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:26:57.561248  111316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:26:57.562636  111316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:26:57.563860  111316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:26:57.565475  111316 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:26:57.565909  111316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:26:57.565968  111316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:26:57.581433  111316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0804 01:26:57.581827  111316 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:26:57.582435  111316 main.go:141] libmachine: Using API Version  1
	I0804 01:26:57.582461  111316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:26:57.582768  111316 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:26:57.582953  111316 main.go:141] libmachine: (functional-410514) Calling .DriverName
	I0804 01:26:57.583201  111316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:26:57.583484  111316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:26:57.583546  111316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:26:57.598633  111316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0804 01:26:57.599035  111316 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:26:57.599460  111316 main.go:141] libmachine: Using API Version  1
	I0804 01:26:57.599481  111316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:26:57.599803  111316 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:26:57.599968  111316 main.go:141] libmachine: (functional-410514) Calling .DriverName
	I0804 01:26:57.633227  111316 out.go:177] * Using the kvm2 driver based on existing profile
	I0804 01:26:57.634587  111316 start.go:297] selected driver: kvm2
	I0804 01:26:57.634604  111316 start.go:901] validating driver "kvm2" against &{Name:functional-410514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-410514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:26:57.634725  111316 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:26:57.637128  111316 out.go:177] 
	W0804 01:26:57.638484  111316 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0804 01:26:57.639737  111316 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-410514 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-410514 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (140.959803ms)

                                                
                                                
-- stdout --
	* [functional-410514] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:26:52.163483  110649 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:26:52.163581  110649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:26:52.163585  110649 out.go:304] Setting ErrFile to fd 2...
	I0804 01:26:52.163589  110649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:26:52.163870  110649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:26:52.164387  110649 out.go:298] Setting JSON to false
	I0804 01:26:52.165323  110649 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11356,"bootTime":1722723456,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0804 01:26:52.165412  110649 start.go:139] virtualization: kvm guest
	I0804 01:26:52.167898  110649 out.go:177] * [functional-410514] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0804 01:26:52.169707  110649 out.go:177]   - MINIKUBE_LOCATION=19364
	I0804 01:26:52.169810  110649 notify.go:220] Checking for updates...
	I0804 01:26:52.172526  110649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0804 01:26:52.174066  110649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	I0804 01:26:52.175571  110649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	I0804 01:26:52.176987  110649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0804 01:26:52.178351  110649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0804 01:26:52.180307  110649 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:26:52.180922  110649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:26:52.181018  110649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:26:52.196926  110649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0804 01:26:52.197387  110649 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:26:52.197963  110649 main.go:141] libmachine: Using API Version  1
	I0804 01:26:52.197986  110649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:26:52.198370  110649 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:26:52.198627  110649 main.go:141] libmachine: (functional-410514) Calling .DriverName
	I0804 01:26:52.198895  110649 driver.go:392] Setting default libvirt URI to qemu:///system
	I0804 01:26:52.199276  110649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:26:52.199324  110649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:26:52.214589  110649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45275
	I0804 01:26:52.215142  110649 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:26:52.215716  110649 main.go:141] libmachine: Using API Version  1
	I0804 01:26:52.215749  110649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:26:52.216100  110649 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:26:52.216326  110649 main.go:141] libmachine: (functional-410514) Calling .DriverName
	I0804 01:26:52.250877  110649 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0804 01:26:52.252382  110649 start.go:297] selected driver: kvm2
	I0804 01:26:52.252405  110649 start.go:901] validating driver "kvm2" against &{Name:functional-410514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-410514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0804 01:26:52.252551  110649 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0804 01:26:52.254894  110649 out.go:177] 
	W0804 01:26:52.256015  110649 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0804 01:26:52.257297  110649 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
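Note: the -f flag in the second command above takes a Go text/template that is rendered against minikube's status object; the keys in the template string (including the literal "kublet" label) are copied verbatim from the test command. A minimal sketch of how such a template renders, using a stand-in struct rather than minikube's real status type:

	package main

	import (
		"os"
		"text/template"
	)

	// status is a stand-in for minikube's status object; only the fields referenced
	// by the template string in the test command above are modelled here.
	type status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
		tmpl := template.Must(template.New("status").Parse(format))
		_ = tmpl.Execute(os.Stdout, status{
			Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
		})
	}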

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-410514 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-410514 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-gf7zs" [8eac7f2f-0011-491f-a3be-5861b6dd7fc8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-gf7zs" [8eac7f2f-0011-491f-a3be-5861b6dd7fc8] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003793719s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.195:32531
functional_test.go:1671: http://192.168.39.195:32531: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-gf7zs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.195:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.195:32531
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.54s)
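The flow exercised above (create a deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort service, fetch the URL reported by minikube) can be reproduced outside the test harness. A rough sketch using os/exec, with the profile and image names taken from the log; the readiness wait the real test performs is omitted:

	package main

	import (
		"fmt"
		"net/http"
		"os/exec"
		"strings"
	)

	// run executes one CLI command and echoes its combined output.
	func run(args ...string) {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%v -> err=%v\n%s", args, err, out)
	}

	func main() {
		run("kubectl", "--context", "functional-410514", "create", "deployment",
			"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
		run("kubectl", "--context", "functional-410514", "expose", "deployment",
			"hello-node-connect", "--type=NodePort", "--port=8080")

		// "minikube service ... --url" prints the NodePort endpoint once it exists.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410514",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		resp, err := http.Get(strings.TrimSpace(string(out)))
		if err != nil {
			panic(err)
		}
		fmt.Println("GET returned", resp.Status)
	}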

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1f195d08-75fb-4f32-af24-2650993b5a51] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004463335s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-410514 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-410514 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-410514 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3735e23c-1e5d-41e6-a4cd-41847ed0b683] Pending
helpers_test.go:344: "sp-pod" [3735e23c-1e5d-41e6-a4cd-41847ed0b683] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3735e23c-1e5d-41e6-a4cd-41847ed0b683] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003696913s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-410514 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-410514 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-410514 delete -f testdata/storage-provisioner/pod.yaml: (4.178055379s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-410514 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b726f43e-05a0-4667-999d-5f3883a326a1] Pending
helpers_test.go:344: "sp-pod" [b726f43e-05a0-4667-999d-5f3883a326a1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b726f43e-05a0-4667-999d-5f3883a326a1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.010743463s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-410514 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.52s)
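The readiness step in this test amounts to watching the claim reach phase Bound before the pod that mounts it is created. A small sketch of that polling step, reading the same "kubectl get pvc myclaim -o=json" output the test inspects; checking .status.phase for "Bound" is standard Kubernetes behaviour, not something specific to this suite:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	// pvcStatus models only the part of the PVC object this check needs.
	type pvcStatus struct {
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	}

	func main() {
		for i := 0; i < 30; i++ {
			out, err := exec.Command("kubectl", "--context", "functional-410514",
				"get", "pvc", "myclaim", "-o=json").Output()
			if err == nil {
				var p pvcStatus
				if json.Unmarshal(out, &p) == nil && p.Status.Phase == "Bound" {
					fmt.Println("myclaim is Bound")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for myclaim to bind")
	}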

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh -n functional-410514 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cp functional-410514:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3823129808/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh -n functional-410514 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh -n functional-410514 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.22s)

                                                
                                    
TestFunctional/parallel/MySQL (27.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-410514 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-zfv7q" [3b0ef233-8651-477c-9f0c-265200fc56e7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-zfv7q" [3b0ef233-8651-477c-9f0c-265200fc56e7] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.00711065s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410514 exec mysql-64454c8b5c-zfv7q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-410514 exec mysql-64454c8b5c-zfv7q -- mysql -ppassword -e "show databases;": exit status 1 (264.875858ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410514 exec mysql-64454c8b5c-zfv7q -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-410514 exec mysql-64454c8b5c-zfv7q -- mysql -ppassword -e "show databases;": exit status 1 (989.255176ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-410514 exec mysql-64454c8b5c-zfv7q -- mysql -ppassword -e "show databases;"
2024/08/04 01:27:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.17s)
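The two ERROR 2002 failures above are expected while mysqld is still initialising inside the pod; the test simply re-runs the query until the server socket accepts connections. A sketch of that retry loop, with the pod name copied from the log and the backoff values chosen arbitrarily:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		args := []string{"--context", "functional-410514", "exec", "mysql-64454c8b5c-zfv7q",
			"--", "mysql", "-ppassword", "-e", "show databases;"}
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			// ERROR 2002 means mysqld is not accepting connections yet; back off and retry.
			time.Sleep(time.Duration(attempt) * time.Second)
		}
		fmt.Println("mysql never became reachable")
	}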

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/97407/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /etc/test/nested/copy/97407/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/97407.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /etc/ssl/certs/97407.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/97407.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /usr/share/ca-certificates/97407.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/974072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /etc/ssl/certs/974072.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/974072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /usr/share/ca-certificates/974072.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
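The test verifies the sync by cat-ing each certificate path inside the VM over SSH. One way to extend that check is to diff the VM copy against the original file on the host; a sketch under that assumption, where the local source path is purely hypothetical:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Hypothetical host-side copy of the certificate that was synced into the VM.
		local, err := os.ReadFile("/tmp/97407.pem")
		if err != nil {
			panic(err)
		}
		remote, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410514",
			"ssh", "sudo cat /etc/ssl/certs/97407.pem").Output()
		if err != nil {
			panic(err)
		}
		if bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
			fmt.Println("certificate synced correctly")
		} else {
			fmt.Println("certificate in VM differs from local copy")
		}
	}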

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-410514 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "sudo systemctl is-active docker": exit status 1 (232.555714ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "sudo systemctl is-active containerd": exit status 1 (222.782937ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
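The non-zero exits above are the point of the test: with crio as the active runtime, "systemctl is-active" reports docker and containerd as inactive and exits with status 3 (as the stderr lines show), so the test only fails if either command prints "active". A sketch of the same check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			// Output() keeps only stdout ("inactive"/"active"); the expected non-zero
			// exit from systemctl surfaces as err and is deliberately tolerated here.
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410514",
				"ssh", "sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			fmt.Printf("%s: %q (err: %v)\n", unit, state, err)
			if state == "active" {
				fmt.Printf("unexpected: %s is running alongside crio\n", unit)
			}
		}
	}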

                                                
                                    
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410514 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-410514
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-410514
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410514 image ls --format short --alsologtostderr:
I0804 01:27:07.945691  112024 out.go:291] Setting OutFile to fd 1 ...
I0804 01:27:07.945944  112024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:07.945955  112024 out.go:304] Setting ErrFile to fd 2...
I0804 01:27:07.945960  112024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:07.946217  112024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
I0804 01:27:07.946774  112024 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:07.946875  112024 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:07.947251  112024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:07.947297  112024 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:07.963376  112024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
I0804 01:27:07.963919  112024 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:07.964655  112024 main.go:141] libmachine: Using API Version  1
I0804 01:27:07.964706  112024 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:07.965138  112024 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:07.965390  112024 main.go:141] libmachine: (functional-410514) Calling .GetState
I0804 01:27:07.967415  112024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:07.967461  112024 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:07.988457  112024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38227
I0804 01:27:07.988872  112024 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:07.989462  112024 main.go:141] libmachine: Using API Version  1
I0804 01:27:07.989491  112024 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:07.989831  112024 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:07.990054  112024 main.go:141] libmachine: (functional-410514) Calling .DriverName
I0804 01:27:07.990278  112024 ssh_runner.go:195] Run: systemctl --version
I0804 01:27:07.990319  112024 main.go:141] libmachine: (functional-410514) Calling .GetSSHHostname
I0804 01:27:07.993403  112024 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:07.993852  112024 main.go:141] libmachine: (functional-410514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:b3:1e", ip: ""} in network mk-functional-410514: {Iface:virbr1 ExpiryTime:2024-08-04 02:24:17 +0000 UTC Type:0 Mac:52:54:00:70:b3:1e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-410514 Clientid:01:52:54:00:70:b3:1e}
I0804 01:27:07.993883  112024 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined IP address 192.168.39.195 and MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:07.994023  112024 main.go:141] libmachine: (functional-410514) Calling .GetSSHPort
I0804 01:27:07.994228  112024 main.go:141] libmachine: (functional-410514) Calling .GetSSHKeyPath
I0804 01:27:07.994395  112024 main.go:141] libmachine: (functional-410514) Calling .GetSSHUsername
I0804 01:27:07.994541  112024 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/functional-410514/id_rsa Username:docker}
I0804 01:27:08.120004  112024 ssh_runner.go:195] Run: sudo crictl images --output json
I0804 01:27:08.179996  112024 main.go:141] libmachine: Making call to close driver server
I0804 01:27:08.180024  112024 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:08.180354  112024 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:08.180373  112024 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:08.180383  112024 main.go:141] libmachine: Making call to close driver server
I0804 01:27:08.180390  112024 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:08.180624  112024 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:08.180641  112024 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:08.180661  112024 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410514 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kicbase/echo-server           | functional-410514  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-410514  | 083f0f920d1d8 | 3.33kB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-410514  | 00c11985408d1 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410514 image ls --format table --alsologtostderr:
I0804 01:27:15.738311  112214 out.go:291] Setting OutFile to fd 1 ...
I0804 01:27:15.738447  112214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:15.738458  112214 out.go:304] Setting ErrFile to fd 2...
I0804 01:27:15.738462  112214 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:15.738651  112214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
I0804 01:27:15.739297  112214 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:15.739412  112214 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:15.739789  112214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:15.739840  112214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:15.755356  112214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
I0804 01:27:15.755888  112214 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:15.756455  112214 main.go:141] libmachine: Using API Version  1
I0804 01:27:15.756480  112214 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:15.756838  112214 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:15.757054  112214 main.go:141] libmachine: (functional-410514) Calling .GetState
I0804 01:27:15.759293  112214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:15.759336  112214 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:15.776387  112214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
I0804 01:27:15.776920  112214 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:15.777515  112214 main.go:141] libmachine: Using API Version  1
I0804 01:27:15.777540  112214 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:15.777937  112214 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:15.778128  112214 main.go:141] libmachine: (functional-410514) Calling .DriverName
I0804 01:27:15.778349  112214 ssh_runner.go:195] Run: systemctl --version
I0804 01:27:15.778373  112214 main.go:141] libmachine: (functional-410514) Calling .GetSSHHostname
I0804 01:27:15.781308  112214 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:15.781746  112214 main.go:141] libmachine: (functional-410514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:b3:1e", ip: ""} in network mk-functional-410514: {Iface:virbr1 ExpiryTime:2024-08-04 02:24:17 +0000 UTC Type:0 Mac:52:54:00:70:b3:1e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-410514 Clientid:01:52:54:00:70:b3:1e}
I0804 01:27:15.781776  112214 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined IP address 192.168.39.195 and MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:15.781990  112214 main.go:141] libmachine: (functional-410514) Calling .GetSSHPort
I0804 01:27:15.782171  112214 main.go:141] libmachine: (functional-410514) Calling .GetSSHKeyPath
I0804 01:27:15.782327  112214 main.go:141] libmachine: (functional-410514) Calling .GetSSHUsername
I0804 01:27:15.782460  112214 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/functional-410514/id_rsa Username:docker}
I0804 01:27:15.881672  112214 ssh_runner.go:195] Run: sudo crictl images --output json
I0804 01:27:15.950884  112214 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.950904  112214 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.951220  112214 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
I0804 01:27:15.951237  112214 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.951256  112214 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:15.951271  112214 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.951296  112214 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.951577  112214 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.951597  112214 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410514 image ls --format json --alsologtostderr:
[{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-410514"],"size":"4943877"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io
/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff
6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"2881e72f3c10a1b0dfa3d43b2761b12dbfb808d55e4e64ec1660a28e42d6bfee","repoDigests":["docker.io/library/2e0e900bb5d8fa94a29f8b2debe3287eed5b60cae22239522ddd48542370eeb2-tmp@sha256:7a050fe861b39e532eb36fd6b49714bdf868ca1ba199719d19e7e2a84ba24b75"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a
6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"083f0f920d1d89aef4baf2aeecc0a64f5b231193241ccec802d0e0fa8dc7f534","repoDigests":["localhost/minikube-local-cache-test@sha256:ce273560b48bc3fca59372df587bb9eefb7ddad28457a0fc99bc3f4e161e1f56"],"repoTags":["localhost/minikube-local-cache-test:functional-410514"],"size":"3330"},{"id":"00c11985408d1b46f387e83737294c7c9c198c820640d1b0db0f305bbc221e12","repoDigests":["localhost/my-image@sha256:7b3c692a6b375c79940de0eaa3d7f5c3d3ed6078cc1903b1574ea305aac07230"],"repoTags":["localhost/my-image:functional-410514"],"size":"1468600"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoT
ags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provi
sioner:v5"],"size":"31470524"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5cc3abe571
7dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410514 image ls --format json --alsologtostderr:
I0804 01:27:15.440502  112190 out.go:291] Setting OutFile to fd 1 ...
I0804 01:27:15.440792  112190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:15.440804  112190 out.go:304] Setting ErrFile to fd 2...
I0804 01:27:15.440810  112190 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:15.441028  112190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
I0804 01:27:15.441657  112190 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:15.441795  112190 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:15.442198  112190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:15.442260  112190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:15.457319  112190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
I0804 01:27:15.457903  112190 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:15.458547  112190 main.go:141] libmachine: Using API Version  1
I0804 01:27:15.458570  112190 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:15.458937  112190 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:15.459132  112190 main.go:141] libmachine: (functional-410514) Calling .GetState
I0804 01:27:15.461098  112190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:15.461151  112190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:15.476783  112190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
I0804 01:27:15.477338  112190 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:15.477923  112190 main.go:141] libmachine: Using API Version  1
I0804 01:27:15.477945  112190 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:15.478335  112190 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:15.478536  112190 main.go:141] libmachine: (functional-410514) Calling .DriverName
I0804 01:27:15.478760  112190 ssh_runner.go:195] Run: systemctl --version
I0804 01:27:15.478788  112190 main.go:141] libmachine: (functional-410514) Calling .GetSSHHostname
I0804 01:27:15.481697  112190 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:15.482190  112190 main.go:141] libmachine: (functional-410514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:b3:1e", ip: ""} in network mk-functional-410514: {Iface:virbr1 ExpiryTime:2024-08-04 02:24:17 +0000 UTC Type:0 Mac:52:54:00:70:b3:1e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-410514 Clientid:01:52:54:00:70:b3:1e}
I0804 01:27:15.482227  112190 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined IP address 192.168.39.195 and MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:15.482365  112190 main.go:141] libmachine: (functional-410514) Calling .GetSSHPort
I0804 01:27:15.482573  112190 main.go:141] libmachine: (functional-410514) Calling .GetSSHKeyPath
I0804 01:27:15.482730  112190 main.go:141] libmachine: (functional-410514) Calling .GetSSHUsername
I0804 01:27:15.482877  112190 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/functional-410514/id_rsa Username:docker}
I0804 01:27:15.592283  112190 ssh_runner.go:195] Run: sudo crictl images --output json
I0804 01:27:15.683929  112190 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.683948  112190 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.684257  112190 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.684276  112190 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:15.684279  112190 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
I0804 01:27:15.684286  112190 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.684296  112190 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.684514  112190 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.684531  112190 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
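The stdout block above is a single JSON array whose entries carry id, repoDigests, repoTags and size fields. A short sketch that decodes that shape and prints a tag/size summary; the struct mirrors only the fields visible in the output above, not any official schema:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the fields visible in the JSON output above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-410514",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%-55s %s bytes\n", tag, img.Size)
		}
	}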

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410514 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 083f0f920d1d89aef4baf2aeecc0a64f5b231193241ccec802d0e0fa8dc7f534
repoDigests:
- localhost/minikube-local-cache-test@sha256:ce273560b48bc3fca59372df587bb9eefb7ddad28457a0fc99bc3f4e161e1f56
repoTags:
- localhost/minikube-local-cache-test:functional-410514
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-410514
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410514 image ls --format yaml --alsologtostderr:
I0804 01:27:08.227330  112064 out.go:291] Setting OutFile to fd 1 ...
I0804 01:27:08.227443  112064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:08.227452  112064 out.go:304] Setting ErrFile to fd 2...
I0804 01:27:08.227456  112064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:08.227655  112064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
I0804 01:27:08.228255  112064 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:08.228355  112064 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:08.228702  112064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:08.228757  112064 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:08.244198  112064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
I0804 01:27:08.244719  112064 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:08.245292  112064 main.go:141] libmachine: Using API Version  1
I0804 01:27:08.245319  112064 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:08.245805  112064 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:08.246039  112064 main.go:141] libmachine: (functional-410514) Calling .GetState
I0804 01:27:08.248355  112064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:08.248413  112064 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:08.263992  112064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35519
I0804 01:27:08.264502  112064 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:08.264992  112064 main.go:141] libmachine: Using API Version  1
I0804 01:27:08.265012  112064 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:08.265381  112064 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:08.265585  112064 main.go:141] libmachine: (functional-410514) Calling .DriverName
I0804 01:27:08.265840  112064 ssh_runner.go:195] Run: systemctl --version
I0804 01:27:08.265884  112064 main.go:141] libmachine: (functional-410514) Calling .GetSSHHostname
I0804 01:27:08.268864  112064 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:08.269271  112064 main.go:141] libmachine: (functional-410514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:b3:1e", ip: ""} in network mk-functional-410514: {Iface:virbr1 ExpiryTime:2024-08-04 02:24:17 +0000 UTC Type:0 Mac:52:54:00:70:b3:1e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-410514 Clientid:01:52:54:00:70:b3:1e}
I0804 01:27:08.269292  112064 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined IP address 192.168.39.195 and MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:08.269515  112064 main.go:141] libmachine: (functional-410514) Calling .GetSSHPort
I0804 01:27:08.269690  112064 main.go:141] libmachine: (functional-410514) Calling .GetSSHKeyPath
I0804 01:27:08.269872  112064 main.go:141] libmachine: (functional-410514) Calling .GetSSHUsername
I0804 01:27:08.270046  112064 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/functional-410514/id_rsa Username:docker}
I0804 01:27:08.390429  112064 ssh_runner.go:195] Run: sudo crictl images --output json
I0804 01:27:08.434695  112064 main.go:141] libmachine: Making call to close driver server
I0804 01:27:08.434709  112064 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:08.434987  112064 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:08.435006  112064 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:08.435029  112064 main.go:141] libmachine: Making call to close driver server
I0804 01:27:08.435006  112064 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
I0804 01:27:08.435038  112064 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:08.435280  112064 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
I0804 01:27:08.435291  112064 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:08.435316  112064 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
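
Note: per the stderr trace above, "image ls --format yaml" is assembled by SSH-ing into the node and reading the CRI-O image store; the YAML fields (id, repoDigests, repoTags, size) map directly onto the crictl output. A minimal sketch of reproducing the listing by hand, assuming the functional-410514 profile from this run is still up:

	# same command the test runs
	out/minikube-linux-amd64 -p functional-410514 image ls --format yaml
	# what it runs on the node under the hood (see the ssh_runner line above)
	out/minikube-linux-amd64 -p functional-410514 ssh -- sudo crictl images --output json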

TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh pgrep buildkitd: exit status 1 (289.830996ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image build -t localhost/my-image:functional-410514 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 image build -t localhost/my-image:functional-410514 testdata/build --alsologtostderr: (6.437512296s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-410514 image build -t localhost/my-image:functional-410514 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2881e72f3c1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-410514
--> 00c11985408
Successfully tagged localhost/my-image:functional-410514
00c11985408d1b46f387e83737294c7c9c198c820640d1b0db0f305bbc221e12
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-410514 image build -t localhost/my-image:functional-410514 testdata/build --alsologtostderr:
I0804 01:27:08.781804  112127 out.go:291] Setting OutFile to fd 1 ...
I0804 01:27:08.782096  112127 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:08.782107  112127 out.go:304] Setting ErrFile to fd 2...
I0804 01:27:08.782113  112127 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0804 01:27:08.782443  112127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
I0804 01:27:08.783371  112127 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:08.783960  112127 config.go:182] Loaded profile config "functional-410514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0804 01:27:08.784312  112127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:08.784350  112127 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:08.800042  112127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35771
I0804 01:27:08.800554  112127 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:08.801346  112127 main.go:141] libmachine: Using API Version  1
I0804 01:27:08.801380  112127 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:08.801919  112127 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:08.802112  112127 main.go:141] libmachine: (functional-410514) Calling .GetState
I0804 01:27:08.804114  112127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0804 01:27:08.804161  112127 main.go:141] libmachine: Launching plugin server for driver kvm2
I0804 01:27:08.820147  112127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32799
I0804 01:27:08.820689  112127 main.go:141] libmachine: () Calling .GetVersion
I0804 01:27:08.821305  112127 main.go:141] libmachine: Using API Version  1
I0804 01:27:08.821337  112127 main.go:141] libmachine: () Calling .SetConfigRaw
I0804 01:27:08.821764  112127 main.go:141] libmachine: () Calling .GetMachineName
I0804 01:27:08.821995  112127 main.go:141] libmachine: (functional-410514) Calling .DriverName
I0804 01:27:08.822265  112127 ssh_runner.go:195] Run: systemctl --version
I0804 01:27:08.822295  112127 main.go:141] libmachine: (functional-410514) Calling .GetSSHHostname
I0804 01:27:08.825216  112127 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:08.825650  112127 main.go:141] libmachine: (functional-410514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:b3:1e", ip: ""} in network mk-functional-410514: {Iface:virbr1 ExpiryTime:2024-08-04 02:24:17 +0000 UTC Type:0 Mac:52:54:00:70:b3:1e Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:functional-410514 Clientid:01:52:54:00:70:b3:1e}
I0804 01:27:08.825687  112127 main.go:141] libmachine: (functional-410514) DBG | domain functional-410514 has defined IP address 192.168.39.195 and MAC address 52:54:00:70:b3:1e in network mk-functional-410514
I0804 01:27:08.825852  112127 main.go:141] libmachine: (functional-410514) Calling .GetSSHPort
I0804 01:27:08.826038  112127 main.go:141] libmachine: (functional-410514) Calling .GetSSHKeyPath
I0804 01:27:08.826190  112127 main.go:141] libmachine: (functional-410514) Calling .GetSSHUsername
I0804 01:27:08.826327  112127 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/functional-410514/id_rsa Username:docker}
I0804 01:27:08.936165  112127 build_images.go:161] Building image from path: /tmp/build.2330749256.tar
I0804 01:27:08.936241  112127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0804 01:27:08.948930  112127 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2330749256.tar
I0804 01:27:08.953859  112127 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2330749256.tar: stat -c "%s %y" /var/lib/minikube/build/build.2330749256.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2330749256.tar': No such file or directory
I0804 01:27:08.953901  112127 ssh_runner.go:362] scp /tmp/build.2330749256.tar --> /var/lib/minikube/build/build.2330749256.tar (3072 bytes)
I0804 01:27:08.982013  112127 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2330749256
I0804 01:27:08.994108  112127 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2330749256 -xf /var/lib/minikube/build/build.2330749256.tar
I0804 01:27:09.008664  112127 crio.go:315] Building image: /var/lib/minikube/build/build.2330749256
I0804 01:27:09.008738  112127 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-410514 /var/lib/minikube/build/build.2330749256 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0804 01:27:15.134921  112127 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-410514 /var/lib/minikube/build/build.2330749256 --cgroup-manager=cgroupfs: (6.126158079s)
I0804 01:27:15.134989  112127 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2330749256
I0804 01:27:15.151581  112127 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2330749256.tar
I0804 01:27:15.163428  112127 build_images.go:217] Built localhost/my-image:functional-410514 from /tmp/build.2330749256.tar
I0804 01:27:15.163471  112127 build_images.go:133] succeeded building to: functional-410514
I0804 01:27:15.163476  112127 build_images.go:134] failed building to: 
I0804 01:27:15.163508  112127 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.163524  112127 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.163802  112127 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.163824  112127 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:15.163833  112127 main.go:141] libmachine: Making call to close driver server
I0804 01:27:15.163842  112127 main.go:141] libmachine: (functional-410514) Calling .Close
I0804 01:27:15.164088  112127 main.go:141] libmachine: Successfully made call to close driver server
I0804 01:27:15.164110  112127 main.go:141] libmachine: Making call to close connection to plugin binary
I0804 01:27:15.164193  112127 main.go:141] libmachine: (functional-410514) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.96s)
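
Note: the STEP 1/3 .. 3/3 lines above imply that testdata/build contains a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), built on the node with podman. A sketch that recreates an equivalent context and build, assuming a running functional-410514 profile (the content of content.txt is arbitrary, the /tmp path is illustrative):

	# recreate a build context like the one the test ships in testdata/build
	mkdir -p /tmp/build-demo && echo demo > /tmp/build-demo/content.txt
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
	# build it inside the cluster node, as the test does
	out/minikube-linux-amd64 -p functional-410514 image build -t localhost/my-image:functional-410514 /tmp/build-demo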

TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.907495045s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-410514
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "230.201477ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "50.903566ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "224.367097ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "49.288792ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-410514 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-410514 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-p5x2c" [11d26f29-da28-4b6a-bbbc-f92a2bc87798] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-p5x2c" [11d26f29-da28-4b6a-bbbc-f92a2bc87798] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004709351s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.16s)
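
Note: the hello-node workload used by the rest of the ServiceCmd tests is created by the two kubectl calls above; in condensed form (context and image verbatim from this run):

	kubectl --context functional-410514 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-410514 expose deployment hello-node --type=NodePort --port=8080
	# the test then waits (up to 10m) for the app=hello-node pod to report Running
	kubectl --context functional-410514 get pods -l app=hello-node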

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr: (1.626253966s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr: (1.020380624s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-410514
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.86s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image save docker.io/kicbase/echo-server:functional-410514 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image rm docker.io/kicbase/echo-server:functional-410514 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-410514
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 image save --daemon docker.io/kicbase/echo-server:functional-410514 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-410514
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.78s)
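
Note: taken together, the ImageCommands tests above push an image through a full round trip: host docker daemon -> CRI-O store in the VM -> tarball -> back into the VM -> back to the host daemon. A condensed sketch with the same flags (the tarball path is illustrative):

	out/minikube-linux-amd64 -p functional-410514 image load --daemon docker.io/kicbase/echo-server:functional-410514
	out/minikube-linux-amd64 -p functional-410514 image save docker.io/kicbase/echo-server:functional-410514 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-410514 image rm docker.io/kicbase/echo-server:functional-410514
	out/minikube-linux-amd64 -p functional-410514 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-410514 image save --daemon docker.io/kicbase/echo-server:functional-410514
	out/minikube-linux-amd64 -p functional-410514 image ls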

TestFunctional/parallel/MountCmd/any-port (10.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdany-port3658298595/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722734812263635532" to /tmp/TestFunctionalparallelMountCmdany-port3658298595/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722734812263635532" to /tmp/TestFunctionalparallelMountCmdany-port3658298595/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722734812263635532" to /tmp/TestFunctionalparallelMountCmdany-port3658298595/001/test-1722734812263635532
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.736639ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  4 01:26 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  4 01:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  4 01:26 test-1722734812263635532
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh cat /mount-9p/test-1722734812263635532
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-410514 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dbbf8a67-2de5-4513-8b7b-b4ded434bafd] Pending
helpers_test.go:344: "busybox-mount" [dbbf8a67-2de5-4513-8b7b-b4ded434bafd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dbbf8a67-2de5-4513-8b7b-b4ded434bafd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dbbf8a67-2de5-4513-8b7b-b4ded434bafd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00383705s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-410514 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdany-port3658298595/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.95s)
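
Note: the any-port test exports a host directory into the guest at /mount-9p over 9p, checks it with findmnt, and has the busybox-mount pod consume the files written there. A minimal manual version of the same flow (host path illustrative; the mount command blocks, so it is backgrounded here):

	out/minikube-linux-amd64 mount -p functional-410514 /tmp/mount-demo:/mount-9p &
	out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-410514 ssh -- ls -la /mount-9p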

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service list -o json
functional_test.go:1490: Took "336.851607ms" to run "out/minikube-linux-amd64 -p functional-410514 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.195:32493
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.195:32493
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
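
Note: both endpoints reported above resolve the hello-node NodePort (32493) against the VM IP (192.168.39.195); the same lookups can be repeated directly:

	out/minikube-linux-amd64 -p functional-410514 service hello-node --url
	out/minikube-linux-amd64 -p functional-410514 service --namespace=default --https --url hello-node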

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdspecific-port857836961/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.642667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdspecific-port857836961/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "sudo umount -f /mount-9p": exit status 1 (249.220612ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-410514 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdspecific-port857836961/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T" /mount1: exit status 1 (321.519954ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-410514 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-410514 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-410514 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3698404952/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.87s)
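
Note: VerifyCleanup starts three concurrent 9p mounts and then tears them all down with a single kill call; the "unable to find parent, assuming dead" lines show the individual mount processes were already gone by the time the test tried to stop them one by one. A sketch of the same sequence (host path illustrative):

	out/minikube-linux-amd64 mount -p functional-410514 /tmp/mount-demo:/mount1 &
	out/minikube-linux-amd64 mount -p functional-410514 /tmp/mount-demo:/mount2 &
	out/minikube-linux-amd64 mount -p functional-410514 /tmp/mount-demo:/mount3 &
	# cleanup step used by the test
	out/minikube-linux-amd64 mount -p functional-410514 --kill=true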

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-410514
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-410514
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-410514
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (212.56s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-998889 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-998889 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m31.885134277s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (212.56s)
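
Note: the whole TestMultiControlPlane suite runs against the HA cluster started here; the 3m31s above covers provisioning the multi-node, multi-control-plane cluster on kvm2 with CRI-O. Condensed form of the start/status pair (flags verbatim from the test, verbosity flags dropped):

	out/minikube-linux-amd64 start -p ha-998889 --wait=true --memory=2200 --ha --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-998889 status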

TestMultiControlPlane/serial/DeployApp (6.72s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-998889 -- rollout status deployment/busybox: (4.544505233s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-8wnwt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-v468b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-8wnwt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-v468b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-8wnwt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-v468b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.72s)

TestMultiControlPlane/serial/PingHostFromPods (1.23s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-8wnwt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-8wnwt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-v468b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-v468b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
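
Note: PingHostFromPods checks host reachability from inside each busybox pod: it resolves host.minikube.internal and pings the resulting address (192.168.39.1 in this run). The probe for a single pod looks like this (pod name taken from this run):

	out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p ha-998889 -- exec busybox-fc5497c4f-7jqps -- sh -c "ping -c 1 192.168.39.1"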

TestMultiControlPlane/serial/AddWorkerNode (57.53s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-998889 -v=7 --alsologtostderr
E0804 01:31:42.265556   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.271491   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.281782   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.302089   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.343006   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.423663   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.584191   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:42.905129   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:43.545949   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:44.826701   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:47.387051   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:31:52.507291   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:32:02.748341   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-998889 -v=7 --alsologtostderr: (56.659260371s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.53s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-998889 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (13.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp testdata/cp-test.txt ha-998889:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889:/home/docker/cp-test.txt ha-998889-m02:/home/docker/cp-test_ha-998889_ha-998889-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test_ha-998889_ha-998889-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889:/home/docker/cp-test.txt ha-998889-m03:/home/docker/cp-test_ha-998889_ha-998889-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test_ha-998889_ha-998889-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889:/home/docker/cp-test.txt ha-998889-m04:/home/docker/cp-test_ha-998889_ha-998889-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test_ha-998889_ha-998889-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp testdata/cp-test.txt ha-998889-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m02:/home/docker/cp-test.txt ha-998889:/home/docker/cp-test_ha-998889-m02_ha-998889.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test_ha-998889-m02_ha-998889.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m02:/home/docker/cp-test.txt ha-998889-m03:/home/docker/cp-test_ha-998889-m02_ha-998889-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test_ha-998889-m02_ha-998889-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m02:/home/docker/cp-test.txt ha-998889-m04:/home/docker/cp-test_ha-998889-m02_ha-998889-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test_ha-998889-m02_ha-998889-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp testdata/cp-test.txt ha-998889-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt ha-998889:/home/docker/cp-test_ha-998889-m03_ha-998889.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test_ha-998889-m03_ha-998889.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt ha-998889-m02:/home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test_ha-998889-m03_ha-998889-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m03:/home/docker/cp-test.txt ha-998889-m04:/home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test_ha-998889-m03_ha-998889-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp testdata/cp-test.txt ha-998889-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1256674419/001/cp-test_ha-998889-m04.txt
E0804 01:32:23.229325   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt ha-998889:/home/docker/cp-test_ha-998889-m04_ha-998889.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test_ha-998889-m04_ha-998889.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt ha-998889-m02:/home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test_ha-998889-m04_ha-998889-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 cp ha-998889-m04:/home/docker/cp-test.txt ha-998889-m03:/home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m03 "sudo cat /home/docker/cp-test_ha-998889-m04_ha-998889-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.07s)
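
Note: CopyFile round-trips a test file between the host and every node pair using minikube cp, verifying each copy with an ssh'd sudo cat. The basic pattern for one pair of nodes (paths verbatim from the run above):

	out/minikube-linux-amd64 -p ha-998889 cp testdata/cp-test.txt ha-998889:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p ha-998889 cp ha-998889:/home/docker/cp-test.txt ha-998889-m02:/home/docker/cp-test_ha-998889_ha-998889-m02.txt
	out/minikube-linux-amd64 -p ha-998889 ssh -n ha-998889-m02 "sudo cat /home/docker/cp-test_ha-998889_ha-998889-m02.txt"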

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.508006126s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.41s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-998889 node delete m03 -v=7 --alsologtostderr: (16.645712322s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.41s)
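
Note: deleting a secondary node and re-checking cluster health follows the node delete / status / get nodes sequence above; in condensed form:

	out/minikube-linux-amd64 -p ha-998889 node delete m03
	out/minikube-linux-amd64 -p ha-998889 status
	kubectl get nodes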

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/RestartCluster (355.69s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-998889 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0804 01:46:42.266016   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
E0804 01:48:05.312680   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-998889 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m54.881741225s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (355.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (80.94s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-998889 --control-plane -v=7 --alsologtostderr
E0804 01:51:42.265724   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-998889 --control-plane -v=7 --alsologtostderr: (1m20.106060444s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-998889 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (98.05s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-835627 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-835627 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.051380092s)
--- PASS: TestJSONOutput/start/Command (98.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-835627 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-835627 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.37s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-835627 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-835627 --output=json --user=testUser: (7.364848005s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-020892 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-020892 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.872503ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f1a35439-b001-4885-baa2-9a97f5a90f1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-020892] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f62fa3d8-826f-4ee6-89c5-3471e62a4667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"c086f979-618f-4796-b817-e5b23bfc5ed4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1a626892-0c5c-4f08-8303-d857b93a1290","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig"}}
	{"specversion":"1.0","id":"1210909a-12ea-4119-80b9-4cc5e2122e52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube"}}
	{"specversion":"1.0","id":"99dc8564-b20b-49d8-90a8-139f0237c5da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"91cf79eb-b9a3-4153-9273-dfed9a767b37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f9b582e-93ca-45d5-a5d5-ee4bc9e3b71f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-020892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-020892
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (88.97s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-838103 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-838103 --driver=kvm2  --container-runtime=crio: (43.117387168s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-840571 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-840571 --driver=kvm2  --container-runtime=crio: (42.968367607s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-838103
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-840571
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-840571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-840571
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-840571: (1.010615936s)
helpers_test.go:175: Cleaning up "first-838103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-838103
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-838103: (1.00603162s)
--- PASS: TestMinikubeProfile (88.97s)

TestMountStart/serial/StartWithMountFirst (27.53s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-409193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-409193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.533771181s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.53s)

TestMountStart/serial/VerifyMountFirst (0.36s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-409193 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-409193 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (30.35s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-425466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-425466 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.347373406s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.35s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-409193 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-425466
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-425466: (1.283825302s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23.15s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-425466
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-425466: (22.15120799s)
--- PASS: TestMountStart/serial/RestartStopped (23.15s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-425466 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (125.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0804 01:56:42.265820   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.130373325s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.54s)

TestMultiNode/serial/DeployApp2Nodes (5.43s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-229184 -- rollout status deployment/busybox: (3.96404965s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-bvfvg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-jq4l7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-bvfvg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-jq4l7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-bvfvg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-jq4l7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.43s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-bvfvg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-bvfvg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-jq4l7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229184 -- exec busybox-fc5497c4f-jq4l7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (54.25s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-229184 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-229184 -v 3 --alsologtostderr: (53.685368135s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.25s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-229184 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.15s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp testdata/cp-test.txt multinode-229184:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184:/home/docker/cp-test.txt multinode-229184-m02:/home/docker/cp-test_multinode-229184_multinode-229184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test_multinode-229184_multinode-229184-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184:/home/docker/cp-test.txt multinode-229184-m03:/home/docker/cp-test_multinode-229184_multinode-229184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test_multinode-229184_multinode-229184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp testdata/cp-test.txt multinode-229184-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt multinode-229184:/home/docker/cp-test_multinode-229184-m02_multinode-229184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test_multinode-229184-m02_multinode-229184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m02:/home/docker/cp-test.txt multinode-229184-m03:/home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test_multinode-229184-m02_multinode-229184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp testdata/cp-test.txt multinode-229184-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3996378525/001/cp-test_multinode-229184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt multinode-229184:/home/docker/cp-test_multinode-229184-m03_multinode-229184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184 "sudo cat /home/docker/cp-test_multinode-229184-m03_multinode-229184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 cp multinode-229184-m03:/home/docker/cp-test.txt multinode-229184-m02:/home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 ssh -n multinode-229184-m02 "sudo cat /home/docker/cp-test_multinode-229184-m03_multinode-229184-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)

TestMultiNode/serial/StopNode (2.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-229184 node stop m03: (1.447773318s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229184 status: exit status 7 (426.825769ms)

                                                
                                                
-- stdout --
	multinode-229184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-229184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr: exit status 7 (435.932746ms)

                                                
                                                
-- stdout --
	multinode-229184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-229184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0804 01:59:55.317778  129830 out.go:291] Setting OutFile to fd 1 ...
	I0804 01:59:55.317936  129830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:59:55.317948  129830 out.go:304] Setting ErrFile to fd 2...
	I0804 01:59:55.317955  129830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0804 01:59:55.318146  129830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-90243/.minikube/bin
	I0804 01:59:55.318338  129830 out.go:298] Setting JSON to false
	I0804 01:59:55.318366  129830 mustload.go:65] Loading cluster: multinode-229184
	I0804 01:59:55.318487  129830 notify.go:220] Checking for updates...
	I0804 01:59:55.318797  129830 config.go:182] Loaded profile config "multinode-229184": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0804 01:59:55.318815  129830 status.go:255] checking status of multinode-229184 ...
	I0804 01:59:55.319211  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.319289  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.335085  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38993
	I0804 01:59:55.335659  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.336292  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.336321  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.336724  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.336947  129830 main.go:141] libmachine: (multinode-229184) Calling .GetState
	I0804 01:59:55.338651  129830 status.go:330] multinode-229184 host status = "Running" (err=<nil>)
	I0804 01:59:55.338668  129830 host.go:66] Checking if "multinode-229184" exists ...
	I0804 01:59:55.339130  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.339193  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.354783  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
	I0804 01:59:55.355280  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.355801  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.355843  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.356162  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.356346  129830 main.go:141] libmachine: (multinode-229184) Calling .GetIP
	I0804 01:59:55.359554  129830 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 01:59:55.359953  129830 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 01:59:55.359988  129830 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 01:59:55.360313  129830 host.go:66] Checking if "multinode-229184" exists ...
	I0804 01:59:55.360608  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.360675  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.377507  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0804 01:59:55.377931  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.378419  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.378442  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.378851  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.379065  129830 main.go:141] libmachine: (multinode-229184) Calling .DriverName
	I0804 01:59:55.379362  129830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:59:55.379386  129830 main.go:141] libmachine: (multinode-229184) Calling .GetSSHHostname
	I0804 01:59:55.382411  129830 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 01:59:55.382835  129830 main.go:141] libmachine: (multinode-229184) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fd:2f:b1", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:56:54 +0000 UTC Type:0 Mac:52:54:00:fd:2f:b1 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:multinode-229184 Clientid:01:52:54:00:fd:2f:b1}
	I0804 01:59:55.382861  129830 main.go:141] libmachine: (multinode-229184) DBG | domain multinode-229184 has defined IP address 192.168.39.183 and MAC address 52:54:00:fd:2f:b1 in network mk-multinode-229184
	I0804 01:59:55.383054  129830 main.go:141] libmachine: (multinode-229184) Calling .GetSSHPort
	I0804 01:59:55.383230  129830 main.go:141] libmachine: (multinode-229184) Calling .GetSSHKeyPath
	I0804 01:59:55.383361  129830 main.go:141] libmachine: (multinode-229184) Calling .GetSSHUsername
	I0804 01:59:55.383484  129830 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184/id_rsa Username:docker}
	I0804 01:59:55.469088  129830 ssh_runner.go:195] Run: systemctl --version
	I0804 01:59:55.477005  129830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:59:55.497417  129830 kubeconfig.go:125] found "multinode-229184" server: "https://192.168.39.183:8443"
	I0804 01:59:55.497446  129830 api_server.go:166] Checking apiserver status ...
	I0804 01:59:55.497482  129830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0804 01:59:55.512595  129830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup
	W0804 01:59:55.523872  129830 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0804 01:59:55.523943  129830 ssh_runner.go:195] Run: ls
	I0804 01:59:55.529269  129830 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I0804 01:59:55.533940  129830 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I0804 01:59:55.533968  129830 status.go:422] multinode-229184 apiserver status = Running (err=<nil>)
	I0804 01:59:55.533982  129830 status.go:257] multinode-229184 status: &{Name:multinode-229184 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:59:55.534007  129830 status.go:255] checking status of multinode-229184-m02 ...
	I0804 01:59:55.534342  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.534380  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.549834  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0804 01:59:55.550320  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.550876  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.550899  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.551200  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.551364  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetState
	I0804 01:59:55.552940  129830 status.go:330] multinode-229184-m02 host status = "Running" (err=<nil>)
	I0804 01:59:55.552974  129830 host.go:66] Checking if "multinode-229184-m02" exists ...
	I0804 01:59:55.553288  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.553333  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.568440  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38939
	I0804 01:59:55.568970  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.569459  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.569482  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.569912  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.570133  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetIP
	I0804 01:59:55.573027  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | domain multinode-229184-m02 has defined MAC address 52:54:00:02:16:92 in network mk-multinode-229184
	I0804 01:59:55.573447  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:16:92", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:02:16:92 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-229184-m02 Clientid:01:52:54:00:02:16:92}
	I0804 01:59:55.573485  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | domain multinode-229184-m02 has defined IP address 192.168.39.130 and MAC address 52:54:00:02:16:92 in network mk-multinode-229184
	I0804 01:59:55.573573  129830 host.go:66] Checking if "multinode-229184-m02" exists ...
	I0804 01:59:55.573993  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.574050  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.590107  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0804 01:59:55.590513  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.590991  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.591013  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.591330  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.591509  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .DriverName
	I0804 01:59:55.591695  129830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0804 01:59:55.591714  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetSSHHostname
	I0804 01:59:55.594279  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | domain multinode-229184-m02 has defined MAC address 52:54:00:02:16:92 in network mk-multinode-229184
	I0804 01:59:55.594618  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:16:92", ip: ""} in network mk-multinode-229184: {Iface:virbr1 ExpiryTime:2024-08-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:02:16:92 Iaid: IPaddr:192.168.39.130 Prefix:24 Hostname:multinode-229184-m02 Clientid:01:52:54:00:02:16:92}
	I0804 01:59:55.594658  129830 main.go:141] libmachine: (multinode-229184-m02) DBG | domain multinode-229184-m02 has defined IP address 192.168.39.130 and MAC address 52:54:00:02:16:92 in network mk-multinode-229184
	I0804 01:59:55.595034  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetSSHPort
	I0804 01:59:55.595228  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetSSHKeyPath
	I0804 01:59:55.595374  129830 main.go:141] libmachine: (multinode-229184-m02) Calling .GetSSHUsername
	I0804 01:59:55.595526  129830 sshutil.go:53] new ssh client: &{IP:192.168.39.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19364-90243/.minikube/machines/multinode-229184-m02/id_rsa Username:docker}
	I0804 01:59:55.672983  129830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0804 01:59:55.688358  129830 status.go:257] multinode-229184-m02 status: &{Name:multinode-229184-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0804 01:59:55.688391  129830 status.go:255] checking status of multinode-229184-m03 ...
	I0804 01:59:55.688778  129830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0804 01:59:55.688826  129830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0804 01:59:55.704843  129830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I0804 01:59:55.705294  129830 main.go:141] libmachine: () Calling .GetVersion
	I0804 01:59:55.705818  129830 main.go:141] libmachine: Using API Version  1
	I0804 01:59:55.705841  129830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0804 01:59:55.706153  129830 main.go:141] libmachine: () Calling .GetMachineName
	I0804 01:59:55.706361  129830 main.go:141] libmachine: (multinode-229184-m03) Calling .GetState
	I0804 01:59:55.707993  129830 status.go:330] multinode-229184-m03 host status = "Stopped" (err=<nil>)
	I0804 01:59:55.708010  129830 status.go:343] host is not running, skipping remaining checks
	I0804 01:59:55.708018  129830 status.go:257] multinode-229184-m03 status: &{Name:multinode-229184-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (40.79s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-229184 node start m03 -v=7 --alsologtostderr: (40.159465846s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.79s)

TestMultiNode/serial/DeleteNode (2.22s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-229184 node delete m03: (1.680531317s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)

TestMultiNode/serial/RestartMultiNode (181.77s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229184 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229184 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.221687043s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229184 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.77s)

TestMultiNode/serial/ValidateNameConflict (44.53s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229184
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229184-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-229184-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (64.45335ms)

                                                
                                                
-- stdout --
	* [multinode-229184-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-229184-m02' is duplicated with machine name 'multinode-229184-m02' in profile 'multinode-229184'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229184-m03 --driver=kvm2  --container-runtime=crio
E0804 02:11:42.266132   97407 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-90243/.minikube/profiles/functional-410514/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229184-m03 --driver=kvm2  --container-runtime=crio: (43.394820875s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-229184
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-229184: exit status 80 (226.780483ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-229184 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-229184-m03 already exists in multinode-229184-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-229184-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.53s)

TestScheduledStopUnix (113.82s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-529200 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-529200 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.212164251s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529200 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-529200 -n scheduled-stop-529200
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529200 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529200 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529200 -n scheduled-stop-529200
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-529200
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-529200 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-529200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-529200: exit status 7 (76.208634ms)

                                                
                                                
-- stdout --
	scheduled-stop-529200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529200 -n scheduled-stop-529200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529200 -n scheduled-stop-529200: exit status 7 (62.37414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-529200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-529200
--- PASS: TestScheduledStopUnix (113.82s)
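As a reading aid (not part of the captured output), the sequence above corresponds to minikube's scheduled-stop workflow; a minimal sketch, assuming a running profile named scheduled-stop-529200:

	# Schedule a stop 5 minutes out, then replace the schedule with a 15s one.
	out/minikube-linux-amd64 stop -p scheduled-stop-529200 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-529200 --schedule 15s

	# A pending scheduled stop can be cancelled before it fires.
	out/minikube-linux-amd64 stop -p scheduled-stop-529200 --cancel-scheduled

	# After a schedule is allowed to fire, status reports Stopped and exits
	# with status 7, which the test explicitly treats as "may be ok".
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-529200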

                                                
                                    
x
+
TestRunningBinaryUpgrade (190.12s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.451567582 start -p running-upgrade-144534 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.451567582 start -p running-upgrade-144534 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m21.539645441s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-144534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-144534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.782781839s)
helpers_test.go:175: Cleaning up "running-upgrade-144534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-144534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-144534: (1.153179694s)
--- PASS: TestRunningBinaryUpgrade (190.12s)
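For orientation (editorial note): this test upgrades a running cluster in place by re-running start on the same profile with the newer binary. A sketch of that flow, assuming the older release binary has been downloaded to a temporary path (the exact file name varies per run):

	# 1. Start a cluster with the old release binary (placeholder path).
	/tmp/minikube-v1.26.0.<suffix> start -p running-upgrade-144534 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# 2. Without stopping the cluster, start the same profile with the binary
	#    under test; minikube upgrades the running cluster in place.
	out/minikube-linux-amd64 start -p running-upgrade-144534 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	# 3. Clean up the profile.
	out/minikube-linux-amd64 delete -p running-upgrade-144534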

                                                
                                    
x
+
TestPause/serial/Start (150.92s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-141370 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-141370 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m30.917188843s)
--- PASS: TestPause/serial/Start (150.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (64.853651ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-000030] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-90243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-90243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
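For context (editorial note): the non-zero exit here is intentional; --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr above points at the fix. A minimal sketch of the recovery path, using the commands from the log:

	# Rejected with MK_USAGE (exit status 14): the two flags conflict.
	out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio

	# If kubernetes-version is set as a global config value, unset it first
	# (as the error message suggests), then start without a version pin.
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2 --container-runtime=crio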

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (82.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000030 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000030 --driver=kvm2  --container-runtime=crio: (1m22.232057094s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-000030 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2  --container-runtime=crio: (4.474884965s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-000030 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-000030 status -o json: exit status 2 (275.107905ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-000030","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-000030
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-000030: (1.087765285s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.84s)
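As a reading aid (editorial note): re-running start with --no-kubernetes on a profile that already has Kubernetes keeps the VM up but stops the control-plane components, so a plain status check exits non-zero. A minimal sketch:

	# Switch the existing profile to no-Kubernetes mode; the host stays Running
	# while kubelet and the apiserver are Stopped, so status exits with status 2.
	out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p NoKubernetes-000030 status -o json
	# -> {"Name":"NoKubernetes-000030","Host":"Running","Kubelet":"Stopped", ...}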

                                                
                                    
x
+
TestNoKubernetes/serial/Start (25.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000030 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.432807237s)
--- PASS: TestNoKubernetes/serial/Start (25.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-000030 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-000030 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.846526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
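For context (editorial note): the verification is a plain systemd check run over minikube ssh; `systemctl is-active --quiet` exits 0 only when the unit is active, so the non-zero status (3 inside the guest) is exactly what the test expects when kubelet is not running. A minimal sketch:

	# Non-zero exit confirms kubelet is not active inside the guest.
	out/minikube-linux-amd64 ssh -p NoKubernetes-000030 "sudo systemctl is-active --quiet service kubelet"
	echo $?    # expected: non-zero (the inner systemctl reported status 3)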

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (23.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (7.075949665s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.11161674s)
--- PASS: TestNoKubernetes/serial/ProfileList (23.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-000030
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-000030: (1.303169555s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (39.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-000030 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-000030 --driver=kvm2  --container-runtime=crio: (39.847982348s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (39.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-000030 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-000030 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.311353ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (97.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4208737712 start -p stopped-upgrade-866998 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4208737712 start -p stopped-upgrade-866998 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (53.410396855s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4208737712 -p stopped-upgrade-866998 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4208737712 -p stopped-upgrade-866998 stop: (1.461872358s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-866998 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-866998 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.901771329s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.77s)
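For orientation (editorial note): unlike TestRunningBinaryUpgrade above, this flow stops the cluster created by the old binary before starting it again with the binary under test. A sketch, with a placeholder for the per-run temp path of the old release:

	# 1. Create a cluster with the old release binary, then stop it.
	/tmp/minikube-v1.26.0.<suffix> start -p stopped-upgrade-866998 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0.<suffix> -p stopped-upgrade-866998 stop
	# 2. Start the stopped profile with the binary under test to upgrade it.
	out/minikube-linux-amd64 start -p stopped-upgrade-866998 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio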

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-866998
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

                                                
                                    

Test skip (35/215)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    